Representation Models (RM)
Overview
Representation Models (RMs) define a new architectural class of artificial intelligence systems whose primary object of computation is explicit internal representation, rather than language or implicit statistical state. An RM is designed to reason over structured, hierarchical representational spaces, to enforce validity and closure explicitly, and to support controlled growth of representational capacity without competence collapse.

This page is a presentation of the **RM Foundation** publication. It preserves the architectural definitions and conceptual scope of the paper while omitting confidential implementation details.

From language-centric to representation-centric intelligence
Contemporary AI systems, particularly Large Language Models (LLMs), operate within a language-centric paradigm. Language simultaneously serves as:
- the interface to the world,
- the internal representational substrate,
- and the medium of reasoning.
An RM breaks this conflation. Reasoning is no longer performed over language tokens, but over explicit representational objects governed by declared constraints. Language remains essential, but it becomes one modality among others, not the authority of truth.
Representation-centric reasoning
In an RM, intelligence is defined by the ability to construct, transform, and validate representations under constraint. An RM does not ask:

*What is the most probable continuation of this text?*

It asks:

*Which representation of the problem domain is valid, coherent, and closed under the applicable constraints?*

This distinction enables auditability, scope control, and principled refusal.
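The representation-centric question above can be made concrete as an explicit validity-and-closure check. The sketch below is illustrative only; `evaluate`, `Verdict`, and the toy interval domain are assumptions, not part of the RM definition. The point is that the answer is an auditable verdict (which constraints failed, whether closure holds) rather than a probability.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A constraint is a named predicate over a candidate representation.
Constraint = Callable[[dict], bool]

@dataclass
class Verdict:
    valid: bool          # all declared constraints hold
    closed: bool         # closure condition is satisfied
    failed: list[str]    # names of violated constraints (for auditability)

def evaluate(rep: dict, constraints: dict[str, Constraint],
             closure: Callable[[dict], bool]) -> Verdict:
    """Return an explicit, auditable verdict instead of a likelihood score."""
    failed = [name for name, check in constraints.items() if not check(rep)]
    ok = not failed
    return Verdict(valid=ok, closed=ok and closure(rep), failed=failed)

# Toy domain (hypothetical): intervals must be typed and ordered,
# and are "closed" once both endpoints are specified.
constraints = {
    "typed":   lambda r: all(isinstance(r.get(k), int) for k in ("start", "end")),
    "ordered": lambda r: r.get("start", 0) <= r.get("end", 0),
}
closure = lambda r: {"start", "end"} <= r.keys()

print(evaluate({"start": 1, "end": 5}, constraints, closure))
```

A failing representation yields `valid=False` together with the list of violated constraint names, which is what makes principled refusal possible: the system can state *why* a representation is rejected.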
Orders as coherent computational spaces
The core organizing principle of RMs is the Order. An Order is not a layer or abstraction level. It is a coherent computational space defined by:
- a class of admissible representational objects,
- admissible operations and transformations,
- validity constraints and closure conditions,
- bounded computational resources.
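The four defining ingredients of an Order can be sketched as a single record. This is a minimal illustration under assumed names (`Order`, `admits`, `budget`, and the toy integer domain are not prescribed by the paper); it shows how admissibility and validity are enforced at every operation rather than checked after the fact.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Order:
    """Illustrative sketch of an Order as a coherent computational space."""
    name: str
    admits: Callable[[Any], bool]                  # class of admissible objects
    operations: dict[str, Callable[[Any], Any]]    # admissible transformations
    constraints: list[Callable[[Any], bool]]       # validity constraints
    closure: Callable[[Any], bool]                 # closure condition
    budget: int                                    # bounded computational resources

    def apply(self, op: str, obj: Any) -> Any:
        """Apply an admissible operation, enforcing admissibility and validity."""
        if not self.admits(obj):
            raise ValueError(f"object not admissible in Order {self.name!r}")
        result = self.operations[op](obj)
        if not all(check(result) for check in self.constraints):
            raise ValueError(f"operation {op!r} violated a validity constraint")
        return result

# Toy Order over non-negative integers with a single doubling operation.
o1 = Order(
    name="O1",
    admits=lambda x: isinstance(x, int) and x >= 0,
    operations={"double": lambda x: 2 * x},
    constraints=[lambda x: x >= 0],
    closure=lambda x: x < 100,
    budget=1000,
)
print(o1.apply("double", 21))  # 42
```

Because validity is checked inside `apply`, no transformation can silently produce an object outside the Order's admissible space.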
Knowledge, learning, reasoning, and evolution
RMs make a strict separation between four processes that are often conflated:
- Knowledge — validated, stationary representations that have achieved closure within an Order.
- Learning — integration of validated structures into the representational substrate, aligning new content with existing competence.
- Reasoning — linear computation performed within a stabilized Order, deriving consequences under existing constraints.
- Evolution — structural regime change triggered when persistent non-closure cannot be resolved within the current Order.
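The separation above can be sketched as an explicit dispatch over closure and validation signals. The decision logic and the `patience` threshold below are assumptions for illustration only; the paper specifies the four processes, not how a step is routed among them.

```python
from enum import Enum, auto

class Process(Enum):
    KNOWLEDGE = auto()   # validated, stationary, closed within an Order
    LEARNING = auto()    # integrating validated structures into the substrate
    REASONING = auto()   # linear derivation within a stabilized Order
    EVOLUTION = auto()   # regime change on persistent non-closure

def classify_step(closed: bool, validated: bool, non_closure_streak: int,
                  patience: int = 3) -> Process:
    """Route a computational step to one of the four processes,
    given explicit closure and validation signals (hypothetical policy)."""
    if non_closure_streak >= patience:
        return Process.EVOLUTION      # current Order is insufficient
    if closed and validated:
        return Process.KNOWLEDGE      # stable, non-negotiable competence
    if validated:
        return Process.LEARNING       # validated but not yet closed
    return Process.REASONING          # keep deriving under existing constraints
```

The key property the sketch preserves is that evolution is triggered only by *persistent* non-closure, never by a single failed derivation.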
Non-closure as a first-class signal
When closure cannot be achieved within an Order, the system does not approximate or guess. Instead, non-closure is surfaced explicitly as a structural state. It indicates that the current representational regime is insufficient under the active constraints. Non-closure is the driver of controlled representational growth. The architectural response to non-closure, whether through refinement within the current regime or transition to a new one, is governed by explicit constraints and is implementation-dependent.

Competence stabilization and hallucination prevention
A defining property of RMs is competence preservation. Once knowledge stabilizes at a lower Order, it becomes non-negotiable competence for higher Orders. Higher-level reasoning may contextualize or constrain lower-level knowledge, but it cannot overwrite it. This architecture structurally prevents hallucination, understood as competence collapse under higher-level reasoning.

Representational adaptation
RMs support controlled expansion of representational capacity through governed adaptation mechanisms. When non-closure persists, the system may adapt its representational regime, either by increasing resolution within the current regime or by transitioning to a new one with expanded capacity. Both forms of adaptation are governed by explicit closure criteria. Specific adaptation operators are implementation-dependent and are not prescribed by the foundational RM definition.

Canonical definition
Formally, a Representation Model can be characterized as:

L = (O, {Rₙ}, {Cₙ}, {Iₙ}, D)

Where:
- O is a partially ordered set of Orders;
- Rₙ are the representational spaces associated with each Order;
- Cₙ are the validity constraints and invariants governing each Order;
- Iₙ are interfaces linking Orders and external modalities;
- D are the dynamical operators governing representational transformation and adaptation within and across Orders.
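The canonical tuple can be rendered as a typed record, matching the five components term by term. The field types below are illustrative assumptions (the paper does not fix concrete types for O, Rₙ, Cₙ, Iₙ, or D).

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class RepresentationModel:
    """Sketch of the canonical tuple L = (O, {Rn}, {Cn}, {In}, D)."""
    orders: list[str]                                    # O: partially ordered set of Orders
    spaces: dict[str, set]                               # Rn: representational space per Order
    constraints: dict[str, list[Callable[[Any], bool]]]  # Cn: validity constraints per Order
    interfaces: dict[str, list[str]]                     # In: links between Orders and modalities
    dynamics: dict[str, Callable]                        # D: transformation/adaptation operators

# Minimal two-Order instance with a single (identity) dynamical operator.
rm = RepresentationModel(
    orders=["O1", "O2"],
    spaces={"O1": set(), "O2": set()},
    constraints={"O1": [], "O2": []},
    interfaces={"O1": ["O2"]},
    dynamics={"lift": lambda x: x},
)
print(rm.orders)  # ['O1', 'O2']
```

Freezing the record mirrors the intent that the tuple is a definition of the model, not a mutable runtime state.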
Scope and intent
This foundation document:
- defines what an RM is;
- clarifies which architectural properties distinguish it from existing paradigms;
- establishes a conceptual baseline for validation and instantiation.
Reference publication
Representation Models (RM): Explicit Representation, Controlled Reasoning, and beyond Language-Centric AI. Simone Mazzoni, January 2026.
- DOI: 10.5281/zenodo.18288540
- Citation: Mazzoni, S. (2026). Representation Models (RM): Explicit Representation, Controlled Reasoning, and beyond Language-Centric AI.
This page is the web-accessible counterpart of the RM Foundation paper: a faithful transcription and structural condensation that preserves the architectural definitions, scope boundaries, and non-claims of the original publication while omitting confidential implementation details. It should be read as an architectural definition, not an implementation specification.