Large Representation Models (LRM): Explicit Representation, Controlled Reasoning, and the Limits of Language‑Centric AI
Simone Mazzoni — RCUBEAI Research Lab (Paris, France) — December 2025

Abstract

Large Language Models (LLMs) are powerful but remain language‑centric and rely on implicit representations learned from data. This creates well-known limitations in regulated or safety‑critical contexts: opacity, limited auditability, and difficulty enforcing invariants. This paper introduces Large Representation Models (LRMs): AI systems whose primary objects of computation are explicit internal representations. LRMs:
  • decouple language from reasoning,
  • separate validated knowledge from exploratory computation,
  • enforce structural invariants and closure constraints,
  • organize cognition across hierarchical representational Orders,
  • and support controlled representational growth (promotion and scaling).

Executive summary

  • Core shift: intelligence is treated as representation construction + validation, not token continuation.
  • Governance: explicit constraints, thresholds, and invariants enable auditability and controlled reasoning.
  • Scaling path: complexity is absorbed via vertical structure (Orders), not only parameter count.

1. Why LRMs

LLMs treat language as both:
  1. the primary input modality, and
  2. the internal substrate of reasoning.
This architectural flattening makes it difficult to:
  • preserve hard constraints,
  • expose validity thresholds,
  • prevent competence collapse (hallucination as structural failure),
  • and certify behavior under domain rules.
LRMs propose a structural redesign: explicit, hierarchical representational computation.

2. Representation‑centric intelligence

2.1 Language‑centric vs representation‑centric

An LRM does not ask:
“What is the most probable continuation of this text?”
It asks:
“What representation of the domain is consistent, stable, and valid under applicable constraints?”

2.2 Orders and hierarchical decomposition

Problems are decomposed across Orders of representation:
  • lower Orders: concrete facts, syntax, local relations,
  • higher Orders: abstractions, meta‑constraints, cross‑domain coherence.
This reduces category errors (e.g., resolving normative conflicts with statistical heuristics).
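
A minimal sketch can make this concrete. The Python fragment below is purely illustrative (all names are hypothetical and no particular implementation is implied): it encodes the same domain at two Orders and guards operation admissibility, so that a normative conflict is never handled by the statistical operations of the lower Order.

```python
# Illustrative only: hypothetical names, no particular implementation implied.
# The same domain is represented at two Orders; an admissibility guard keeps
# normative questions out of the statistical machinery of the lower Order.

ORDER_1 = {  # lower Order: concrete facts, local relations
    "facts": [("sensor_a", "reads", 42.0), ("sensor_b", "reads", 40.5)],
    "admissible_ops": {"aggregate", "interpolate"},
}

ORDER_2 = {  # higher Order: abstractions, meta-constraints
    "constraints": ["readings_must_be_calibrated", "no_unlogged_overrides"],
    "admissible_ops": {"check_consistency", "resolve_conflict"},
}

def apply_op(order: dict, op: str, payload):
    """Refuse operations outside the Order's admissible set (category-error guard)."""
    if op not in order["admissible_ops"]:
        raise ValueError(f"operation '{op}' is not admissible at this Order")
    return (op, payload)  # placeholder for the actual transformation

apply_op(ORDER_1, "aggregate", ORDER_1["facts"])               # fine at Order 1
apply_op(ORDER_2, "resolve_conflict", ORDER_2["constraints"])  # fine at Order 2
# apply_op(ORDER_1, "resolve_conflict", ...) would raise: the category error is blocked.
```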

3. Four cognitive modes

Knowledge, the first of these modes, consists of validated, stationary representations available at a moment in time.
In LRMs, knowledge is explicitly represented and auditable.

4. Implicit vs explicit representation

LLMs (implicit)

  • reasoning occurs in a learned embedding geometry,
  • constraints are distributed across parameters,
  • thresholds and invariants are not isolated or controllable.

LRMs (explicit)

  • constraints and thresholds are represented directly,
  • knowledge is separated from exploration,
  • propositions are evaluated against declared invariants,
  • outputs are “committed representational objects” with traceable validity conditions.
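
As an illustration of the last two points, the following sketch (hypothetical names; not a prescribed API) declares an invariant explicitly and commits a representational object only when that invariant holds, recording traceable validity conditions alongside the content.

```python
# Illustrative sketch, hypothetical names: an explicitly declared invariant and the
# committed representational object it licenses. The threshold and the validity
# conditions are first-class data, not parameters of a learned geometry.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass(frozen=True)
class Invariant:
    name: str
    check: Callable[[dict], bool]   # predicate over a candidate representation

@dataclass(frozen=True)
class CommittedObject:
    content: dict                   # the representation itself
    order: int                      # Order at which it was validated
    satisfied_invariants: tuple     # declared invariants it passed
    committed_at: str               # traceable validity condition (when it was committed)

MAX_DOSE = Invariant("dose_below_threshold", lambda r: r["dose_mg"] <= 50)

def commit(candidate: dict, order: int) -> CommittedObject:
    if not MAX_DOSE.check(candidate):
        raise ValueError("declared invariant violated: refusing to commit")
    return CommittedObject(
        content=candidate,
        order=order,
        satisfied_invariants=(MAX_DOSE.name,),
        committed_at=datetime.now(timezone.utc).isoformat(),
    )

obj = commit({"drug": "X", "dose_mg": 20}, order=1)   # obj records exactly what was checked
```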

5. Competence stabilization and hallucination prevention

LLM failure modes can be explained by a flat architecture in which higher‑level inference can silently override lower‑level constraints.
LRMs enforce competence stabilization:
  • knowledge validated at an Order becomes stationary at that Order,
  • for higher Orders, that stabilized structure becomes competence (non‑negotiable constraints),
  • contradictions become explicit non‑closure instead of silent violations.
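
A minimal sketch of this mechanism, with hypothetical names and a toy constraint, is given below: competence inherited from a lower Order is checked before closure, and a contradiction is returned as a structured non‑closure object rather than being overridden.

```python
# Hypothetical names, toy constraint. Competence inherited from a lower Order is
# checked before closure; a contradiction is surfaced as explicit non-closure
# instead of being silently overridden by higher-Order inference.
from dataclasses import dataclass

@dataclass(frozen=True)
class NonClosure:
    order: int
    violated: tuple   # inherited constraints that could not be satisfied
    reason: str

# Validated at Order 1, hence stationary there and non-negotiable here:
COMPETENCE = {
    "speed_limit": lambda plan: plan["max_speed_kmh"] <= 130,
}

def close_at_order_2(candidate_plan: dict):
    violated = tuple(name for name, holds in COMPETENCE.items()
                     if not holds(candidate_plan))
    if violated:
        return NonClosure(order=2, violated=violated, reason="competence conflict")
    return candidate_plan   # closure: the candidate may be committed

print(close_at_order_2({"route": "A", "max_speed_kmh": 180}))   # -> NonClosure(...)
print(close_at_order_2({"route": "B", "max_speed_kmh": 110}))   # -> the plan itself
```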

6. Orders as computational spaces (intrinsic multi‑agenticity)

In LRMs, an Order is a coherent computational space characterized by:
  • admissible representational objects,
  • admissible operations and transformations,
  • validity constraints and closure conditions,
  • resource bounds.
Because each Order has its own constraints and dynamics, the architecture is intrinsically multi‑agentic (without heuristic tool-calling): each Order acts as a specialized reasoning unit with explicit interfaces to adjacent Orders.
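
One possible plain-Python encoding of these four characteristics is sketched below; the field names are assumptions made for illustration, not part of the formal definition.

```python
# One possible plain-Python encoding of an Order as a computational space.
# Field names are illustrative assumptions, not part of the formal definition.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Order:
    level: int
    admissible_types: set              # admissible representational objects
    operations: dict                   # admissible operations / transformations
    constraints: list                  # validity and closure conditions
    resource_bounds: dict              # e.g. step or memory budgets
    lower: Optional["Order"] = None    # explicit interface to adjacent Orders
    upper: Optional["Order"] = None

    def is_valid(self, obj: Any) -> bool:
        """An object is valid at this Order iff it has an admissible type and
        every declared constraint holds for it."""
        return type(obj).__name__ in self.admissible_types and all(
            c(obj) for c in self.constraints
        )
```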

7. Vertical dynamics: promotion vs scaling

7.1 Two causes of non‑closure

  • Abstraction deficit: representation is not expressive enough.
  • Precision deficit: representation is expressive enough but under‑resolved (wide intervals, missing fine‑grained constraints or boundary conditions).

7.2 Two vertical operators

  • Promotion (Vertical Ascension) resolves abstraction deficit by introducing a higher Order.
  • Scaling (Vertical Descension) resolves precision deficit by decomposing into sub‑orders (refinement spaces).
Scaling creates a resolution tree: closure may require descending to refine local constraints, then reintegrating upward as stabilized competence.
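
The dispatch logic implied by this distinction can be sketched as follows (illustrative only; the diagnosis of the deficit type is assumed to be available, and the two operators are represented only by their effect on the Order structure).

```python
# Illustrative dispatcher, hypothetical names: the deficit diagnosis is given,
# and the vertical operators are reduced to their effect on the Order structure.
from enum import Enum

class Deficit(Enum):
    ABSTRACTION = "abstraction"   # representation not expressive enough
    PRECISION = "precision"       # expressive enough but under-resolved

def resolve_non_closure(order_level: int, deficit: Deficit) -> dict:
    if deficit is Deficit.ABSTRACTION:
        # Promotion (Vertical Ascension): introduce a higher Order.
        return {"operator": "promotion", "target_order": order_level + 1}
    # Scaling (Vertical Descension): decompose into sub-orders (a refinement tree),
    # refine local constraints there, then reintegrate upward as stabilized competence.
    return {"operator": "scaling",
            "sub_orders": [f"{order_level}.{k}" for k in (1, 2)]}

print(resolve_non_closure(2, Deficit.ABSTRACTION))   # promotion to Order 3
print(resolve_non_closure(2, Deficit.PRECISION))     # sub-orders 2.1 and 2.2
```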

8. Knowledge vs competence

Knowledge: what is validated and operational within an Order.
Competence: what is inherited by an Order from lower Orders, functioning as fixed constraints.
The same structure can be:
  • knowledge at the Order where it is validated,
  • competence for higher Orders.
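
A tiny sketch of this dual role, using hypothetical names:

```python
# Hypothetical names: a structure validated at Order 1 is operational knowledge
# there and a fixed constraint (competence) for Order 2.
validated = {"statement": "max_pressure_bar <= 10", "validated_at_order": 1}

def is_knowledge(order: int, item: dict) -> bool:
    return item["validated_at_order"] == order    # usable and revisable at its own Order

def is_competence(order: int, item: dict) -> bool:
    return item["validated_at_order"] < order     # inherited and non-negotiable above it

assert is_knowledge(1, validated)
assert is_competence(2, validated)
```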

9. Canonical definition (formal)

An LRM is an artificial cognitive system whose primary computation operates over explicit, structured, hierarchical representational spaces. Formal tuple:
L = (O, {R_N}, {C_N}, {I_N}, D)
Where:
  • O: partially ordered set of Orders (indexed by N)
  • R_N: representational space at Order N
  • C_N: constraints, invariants, admissible operations governing R_N
  • I_N: interfaces linking R_N to adjacent Orders and external modalities
  • D: dynamical operators (reasoning, learning, promotion, scaling)
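
For readers who prefer a typed rendering, one possible non-normative encoding of this tuple is sketched below; the field names mirror the symbols above, and everything else is an illustrative assumption.

```python
# A non-normative, typed rendering of the tuple L = (O, {R_N}, {C_N}, {I_N}, D).
# Names mirror the symbols above; in particular, O is simplified here to an
# ordered list of integer indices rather than a general partial order.
from dataclasses import dataclass

@dataclass
class LRM:
    orders: list          # O: Orders (partial order simplified to a list)
    spaces: dict          # R_N: representational space per Order
    constraints: dict     # C_N: constraints, invariants, admissible operations
    interfaces: dict      # I_N: links to adjacent Orders and external modalities
    dynamics: dict        # D: reasoning, learning, promotion, scaling operators
```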

10. Positioning LRMs in the AI landscape


11. Architectural instantiation (per Order)

Within each Order N, three components interact:
  • Substrate Φ_N: stabilized representations (knowledge + inherited competence)
  • Projection space Ψ_N: dynamic exploration and reasoning
  • Interface W_N: observation, closure evaluation, representational commit

W_N as observer and closing gate

W_N:
  • evaluates candidate structures from Ψ_N against Φ_N,
  • decides closure under constraints of the Order,
  • commits validated representational objects to Φ_N,
  • emits structured non‑closure when closure cannot be achieved.
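
A compressed, illustrative version of this cycle is sketched below; the candidate generator and the constraint are placeholders, not part of the architecture.

```python
# Compressed, illustrative cycle: Psi proposes, W evaluates against Phi and the
# Order's constraints, then commits or reports structured non-closure.
from dataclasses import dataclass, field

@dataclass
class NonClosure:
    order: int
    reason: str

@dataclass
class OrderRuntime:
    level: int
    phi: list = field(default_factory=list)         # Φ_N: stabilized representations
    constraints: list = field(default_factory=list)

    def psi_candidates(self):
        """Ψ_N: dynamic exploration, stubbed here as two fixed candidates."""
        yield {"claim": "c1", "support": 0.9}
        yield {"claim": "c2", "support": 0.2}

    def w_step(self):
        """W_N: observe Ψ_N, decide closure, commit to Φ_N or emit non-closure."""
        for cand in self.psi_candidates():
            if all(check(cand, self.phi) for check in self.constraints):
                self.phi.append(cand)                # representational commit
                return cand
        return NonClosure(self.level, "no candidate satisfied the Order's constraints")

rt = OrderRuntime(level=1, constraints=[lambda c, phi: c["support"] >= 0.5])
print(rt.w_step())   # commits and returns the first valid candidate
```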

12–13. Orchestration and multi‑clock computation

LRMs do not assume a single global execution timeline:
  • each Order has an Order‑relative “clock” (its own progression regime),
  • coordination is achieved via scoped synchronization windows and consistency conditions,
  • closure decisions remain local to W_N (not a central scheduler).
In mature LRMs, orchestration is presented as an emergent property of representational laws and hierarchical constraints, though early implementations may use explicit orchestration layers pragmatically.
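
The following toy loop illustrates the multi-clock idea under strong simplifying assumptions (fixed tick periods, a single synchronization window length, hypothetical names); it is not a proposed scheduler.

```python
# Toy multi-clock loop. Each Order progresses on its own clock; exchange happens
# only inside scoped synchronization windows, and local progression (closure)
# stays inside each Order.
def run(steps: int = 12) -> None:
    clocks = {1: 1, 2: 3, 3: 6}        # Order -> tick period (Order-relative clock)
    sync_window = 6                    # scoped synchronization every 6 global ticks
    progress = {order: 0 for order in clocks}

    for t in range(1, steps + 1):
        for order, period in clocks.items():
            if t % period == 0:
                progress[order] += 1   # local closure decision stays inside this Order
        if t % sync_window == 0:
            # consistency condition across adjacent Orders, not a central scheduler
            print(f"t={t}: synchronization window, local progress = {progress}")

run()
```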

14. LRMs and safe AGI (glass‑box requirement)

LRMs are positioned as a candidate foundation for safe AGI because they:
  • make reasoning and representational change traceable,
  • enforce constraints hierarchically (constraints cannot be bypassed by higher‑level reasoning),
  • treat non‑closure as a signal for governed evolution rather than forced outputs.

15–16. Scope, grounding, and roadmap

This is an architectural definition paper:
  • it avoids internal algorithmic implementations and engineering details,
  • it states its theoretical grounding in the broader R3 program while remaining independently evaluable at the architecture level,
  • it reports a completed foundational validation phase and outlines a roadmap toward mature LRM properties (including autonomous promotion/scaling and intrinsic orchestration).

17. Conclusion

LRMs reorganize artificial cognition around:
  • explicit, hierarchical representation,
  • controlled reasoning with competence preservation,
  • governed evolution driven by non‑closure,
  • and glass‑box auditability.

Suggested citation

Mazzoni, S. (2025). Large Representation Models (LRM): Explicit Representation, Controlled Reasoning, and the Limits of Language-Centric AI. RCUBEAI Research Lab.