Simone Mazzoni — RCUBEAI Research Lab (Paris, France) — December 2025
Abstract
Large Language Models (LLMs) are powerful but remain language‑centric and rely on implicit representations learned from data. This creates well-known limitations in regulated or safety‑critical contexts: opacity, limited auditability, and difficulty enforcing invariants. This paper introduces Large Representation Models (LRMs): AI systems whose primary object of computation is explicit internal representations. LRMs:
- decouple language from reasoning,
- separate validated knowledge from exploratory computation,
- enforce structural invariants and closure constraints,
- organize cognition across hierarchical representational Orders,
- and support controlled representational growth (promotion and scaling).
Executive summary
- Core shift: intelligence is treated as representation construction + validation, not token continuation.
- Governance: explicit constraints, thresholds, and invariants enable auditability and controlled reasoning.
- Scaling path: complexity is absorbed via vertical structure (Orders), not only parameter count.
1. Why LRMs
LLMs treat language as both:
- the primary input modality, and
- the internal substrate of reasoning.
By contrast, LRMs are designed to:
- preserve hard constraints,
- expose validity thresholds,
- prevent competence collapse (hallucination as structural failure),
- and certify behavior under domain rules.
2. Representation‑centric intelligence
2.1 Language‑centric vs representation‑centric
An LRM does not ask: “What is the most probable continuation of this text?” It asks:
“What representation of the domain is consistent, stable, and valid under applicable constraints?”
2.2 Orders and hierarchical decomposition
Problems are decomposed across Orders of representation:
- lower Orders: concrete facts, syntax, local relations,
- higher Orders: abstractions, meta‑constraints, cross‑domain coherence.
3. Four cognitive modes
- Knowledge
- Learning
- Reasoning
- Evolution
Knowledge, the first of these modes, denotes validated, stationary representations available at a moment in time. In LRMs, knowledge is explicitly represented and auditable.
4. Implicit vs explicit representation
LLMs (implicit)
- reasoning occurs in a learned embedding geometry,
- constraints are distributed across parameters,
- thresholds and invariants are not isolated or controllable.
LRMs (explicit)
- constraints and thresholds are represented directly,
- knowledge is separated from exploration,
- propositions are evaluated against declared invariants,
- outputs are “committed representational objects” with traceable validity conditions.
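To make the contrast concrete, here is a minimal Python sketch of evaluating a proposition against declared invariants and committing it with a traceable validity trail. The names (`Invariant`, `CommittedObject`, `commit`) are illustrative assumptions, not the paper's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Invariant:
    """A declared, inspectable constraint (explicit, not distributed in weights)."""
    name: str
    check: Callable[[dict], bool]

@dataclass
class CommittedObject:
    """A representational object whose validity conditions are recorded explicitly."""
    payload: dict
    satisfied: list  # names of the invariants the payload was validated against

def commit(payload: dict, invariants: list) -> CommittedObject:
    """Evaluate a candidate against every declared invariant before committing."""
    violated = [inv.name for inv in invariants if not inv.check(payload)]
    if violated:
        # Explicit failure instead of a silently degraded output.
        raise ValueError(f"non-closure: violated {violated}")
    return CommittedObject(payload, [inv.name for inv in invariants])

# A toy domain rule: a dose must stay within a validated range.
rules = [Invariant("dose_in_range", lambda p: 0 < p["dose_mg"] <= 100)]
obj = commit({"dose_mg": 50}, rules)
print(obj.satisfied)  # ['dose_in_range'] -- an auditable validity trail
```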
5. Competence stabilization and hallucination prevention
LLM failure modes can be explained by a flat architecture: higher-level inference can override lower-level constraints silently. In LRMs, by contrast (a minimal sketch follows this list):
- knowledge validated at an Order becomes stationary at that Order,
- for higher Orders, that stabilized structure becomes competence (non‑negotiable constraints),
- contradictions become explicit non‑closure instead of silent violations.
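The sketch below illustrates this stabilization mechanism, under assumed names (`NonClosure`, `validate_at_order`) that are illustrative rather than the paper's own:

```python
class NonClosure(Exception):
    """A contradiction surfaces as an explicit signal, never a silent override."""

def validate_at_order(candidate: dict, competence: list, local_constraints: list):
    # Competence inherited from lower Orders is checked first and is non-negotiable.
    for name, check in competence:
        if not check(candidate):
            raise NonClosure(f"inherited competence violated: {name}")
    # This Order's own constraints are then applied.
    for name, check in local_constraints:
        if not check(candidate):
            raise NonClosure(f"local constraint unmet: {name}")
    return candidate  # validated: becomes stationary knowledge at this Order

# Higher-Order inference cannot bypass the lower-Order rule, however confident it is.
competence = [("non_negative_balance", lambda c: c["balance"] >= 0)]
try:
    validate_at_order({"balance": -10}, competence, [])
except NonClosure as e:
    print(e)  # inherited competence violated: non_negative_balance
```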
6. Orders as computational spaces (intrinsic multi‑agenticity)
In LRMs, an Order is a coherent computational space characterized by four elements (sketched in code after this list):
- admissible representational objects,
- admissible operations and transformations,
- validity constraints and closure conditions,
- resource bounds.
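This characterization can be read almost directly as a data structure. A minimal sketch, with field names that are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Order:
    """One coherent computational space in the hierarchy."""
    level: int
    admissible_types: set     # admissible representational objects
    operations: dict          # name -> admissible transformation
    closure_conditions: list  # validity checks candidates must pass
    resource_bounds: dict     # e.g. {"max_steps": 1000, "max_objects": 50}
```

Because each Order bundles its own objects, operations, and bounds, it behaves as a self-contained, agent-like space; this is one reading of the “intrinsic multi-agenticity” in the section title.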
7. Vertical dynamics: promotion vs scaling
7.1 Two causes of non‑closure
- Abstraction deficit: representation is not expressive enough.
- Precision deficit: representation is expressive enough but under‑resolved (wide intervals, missing fine constraints, boundary conditions).
7.2 Two vertical operators
- Promotion (Vertical Ascension) resolves abstraction deficit by introducing a higher Order.
- Scaling (Vertical Descension) resolves precision deficit by decomposing into sub‑orders (refinement spaces).
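A sketch of how the two operators could be dispatched from a non-closure diagnosis; the `Deficit` enum and `resolve_non_closure` function are hypothetical names introduced here for illustration:

```python
from enum import Enum, auto

class Deficit(Enum):
    ABSTRACTION = auto()  # representation not expressive enough
    PRECISION = auto()    # expressive enough but under-resolved

def resolve_non_closure(deficit: Deficit, level: int) -> str:
    """Map each cause of non-closure to its vertical operator."""
    if deficit is Deficit.ABSTRACTION:
        return f"promotion: introduce Order {level + 1} above Order {level}"
    return f"scaling: decompose Order {level} into refinement sub-orders"

print(resolve_non_closure(Deficit.PRECISION, 2))
# scaling: decompose Order 2 into refinement sub-orders
```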
8. Knowledge vs competence
Knowledge: what is validated and operational within an Order.
Competence: what is inherited by an Order from lower Orders, functioning as fixed constraints.
The same structure can be:
- knowledge at the Order where it is validated,
- competence for higher Orders.
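A toy illustration of this duality, with invented structure names; the frozen container stands in for the non-negotiable status competence acquires upstream:

```python
# The same structure, validated at Order 2, plays two roles:
knowledge_at_2 = {"mass_conservation"}         # knowledge: operational at Order 2
competence_at_3 = frozenset(knowledge_at_2)    # competence: frozen constraint at Order 3

assert "mass_conservation" in competence_at_3  # available upstream...
# ...but frozenset has no mutation methods: Order 3 cannot renegotiate it.
```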
9. Canonical definition (formal)
An LRM is an artificial cognitive system whose primary computation operates over explicit, structured, hierarchical representational spaces. Formally, an LRM is the tuple (O, R_N, C_N, I_N, D):
- O: partially ordered set of Orders (indexed by N)
- R_N: representational space at Order N
- C_N: constraints, invariants, admissible operations governing R_N
- I_N: interfaces linking R_N to adjacent Orders and external modalities
- D: dynamical operators (reasoning, learning, promotion, scaling)
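A direct, minimal transcription of the tuple into code; the field names mirror the symbols above, while the container choices are assumptions:

```python
from dataclasses import dataclass

@dataclass
class LRMDefinition:
    O: list  # partially ordered set of Orders, indexed by N
    R: dict  # N -> R_N, the representational space at Order N
    C: dict  # N -> C_N, constraints, invariants, admissible operations on R_N
    I: dict  # N -> I_N, interfaces to adjacent Orders and external modalities
    D: dict  # named dynamical operators: reasoning, learning, promotion, scaling
```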
10. Positioning LRMs in the AI landscape
11. Architectural instantiation (per Order)
Within each Order N, three components interact:
- Substrate Φₙ: stabilized representations (knowledge + inherited competence)
- Projection space Ψₙ: dynamic exploration and reasoning
- Interface Wₙ: observation, closure evaluation, representational commit
Wₙ as observer and closing gate
Wₙ, sketched in code after this list:
- evaluates candidate structures from Ψₙ against Φₙ,
- decides closure under constraints of the Order,
- commits validated representational objects to Φₙ,
- emits structured non‑closure when closure cannot be achieved.
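A compact sketch of Wₙ's gate loop. The name `w_gate` is illustrative; `phi` stands for Φₙ and `candidates` for proposals arriving from Ψₙ:

```python
class NonClosure(Exception):
    """Structured non-closure: carries the failed candidates and unmet constraints."""

def w_gate(candidates, phi, constraints):
    """W_n: evaluate Psi_n proposals against Phi_n; commit the first that closes."""
    failures = []
    for cand in candidates:
        unmet = [name for name, check in constraints if not check(cand, phi)]
        if not unmet:
            phi.append(cand)  # representational commit into the substrate
            return cand
        failures.append((cand, unmet))
    # No candidate achieved closure: emit a structured signal, not a forced output.
    raise NonClosure(failures)
```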
12–13. Orchestration and multi‑clock computation
LRMs do not assume a single global execution timeline (a minimal sketch follows this list):
- each Order has an Order‑relative “clock” (its own progression regime),
- coordination is achieved via scoped synchronization windows and consistency conditions,
- closure decisions remain local to Wₙ (not a central scheduler).
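A minimal sketch of Order-relative clocks meeting in scoped synchronization windows; the class and method names are assumptions for illustration:

```python
class OrderClock:
    """Order-relative clock: each Order advances on its own progression regime."""
    def __init__(self, level: int, period: int):
        self.level, self.period, self.tick = level, period, 0

    def advance(self):
        self.tick += 1

    def in_sync_window(self, window: int = 1) -> bool:
        # A scoped synchronization point rather than a global timeline.
        return self.tick % self.period < window

clocks = [OrderClock(1, 3), OrderClock(2, 5)]
for step in range(15):
    for c in clocks:
        c.advance()  # each Order progresses at its own rate
    if all(c.in_sync_window() for c in clocks):
        print(f"step {step}: scoped sync window across Orders")
```

Note that nothing in the sketch forces a commit at a sync window; closure stays local to each Wₙ, consistent with the last point above.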
14. LRMs and safe AGI (glass‑box requirement)
LRMs are positioned as a candidate foundation for safe AGI because they:
- make reasoning and representational change traceable,
- enforce constraints hierarchically (constraints cannot be bypassed by higher‑level reasoning),
- treat non‑closure as a signal for governed evolution rather than forced outputs.
15–16. Scope, grounding, and roadmap
This is an architectural definition paper:
- it avoids internal algorithmic implementations and engineering details,
- it states theoretical grounding in the broader R3 program, while remaining evaluable independently at the architecture level,
- it reports a completed foundational validation phase and outlines a roadmap toward mature LRM properties (including autonomous promotion/scaling and intrinsic orchestration).
17. Conclusion
LRMs reorganize artificial cognition around:
- explicit, hierarchical representation,
- controlled reasoning with competence preservation,
- governed evolution driven by non‑closure,
- and glass‑box auditability.