Recent advances in Artificial Intelligence have been driven by large-scale statistical models
operating over implicit representational spaces, with Large Language Models (LLMs) as the most
prominent example. While such systems demonstrate impressive capabilities across a range of
tasks, their reliance on implicit representations and probabilistic continuation limits transparency,
auditability, and control in safety-critical or regulated domains.

This paper introduces Representation Models (RMs), a new class of AI systems in which
explicit internal representations constitute the primary computational substrate. In an RM,
reasoning, learning, and decision-making proceed through the construction, transformation, and
stabilization of representations under declared constraints, rather than through modality-specific
prediction alone. RMs separate validated knowledge from exploratory computation, enforce
structural invariants, and organize computation across hierarchical representational orders.

We provide a precise architectural definition of RMs, clarify the distinction between knowledge,
learning, reasoning, and evolution, and position RMs relative to language models, world models,
and symbolic systems. This document serves as the foundational public introduction of the
RM paradigm, focusing on architectural principles rather than performance benchmarks, and
establishing the conceptual framework required for subsequent instantiations and validation
phases.
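The separation between validated knowledge and exploratory computation described above can be illustrated with a minimal sketch. All names here (`Representation`, `Workspace`, `has_provenance`, and the promotion logic) are hypothetical assumptions introduced for illustration; they are not part of the paper's specification.

```python
# Hypothetical sketch: explicit representations, declared invariants, and a
# stabilization step that promotes representations from an exploratory
# workspace into validated knowledge. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class Representation:
    """An explicit, inspectable unit of internal state."""
    name: str
    content: Dict[str, str]

Invariant = Callable[[Representation], bool]

class Workspace:
    """Separates exploratory (provisional) from validated representations."""
    def __init__(self, invariants: List[Invariant]):
        self.invariants = invariants          # declared structural constraints
        self.exploratory: List[Representation] = []
        self.validated: List[Representation] = []

    def propose(self, rep: Representation) -> None:
        # New representations enter as exploratory computation.
        self.exploratory.append(rep)

    def stabilize(self) -> None:
        # Promote a representation to validated knowledge only if every
        # declared invariant holds; otherwise it remains exploratory.
        still_open: List[Representation] = []
        for rep in self.exploratory:
            if all(inv(rep) for inv in self.invariants):
                self.validated.append(rep)
            else:
                still_open.append(rep)
        self.exploratory = still_open

# Example invariant (hypothetical): every validated representation must
# carry a provenance record.
has_provenance: Invariant = lambda r: "provenance" in r.content

ws = Workspace(invariants=[has_provenance])
ws.propose(Representation("fact-1", {"claim": "A", "provenance": "sensor"}))
ws.propose(Representation("guess-1", {"claim": "B"}))
ws.stabilize()
# fact-1 is promoted; guess-1 stays exploratory because it lacks provenance.
```

The point of the sketch is architectural, not algorithmic: reasoning operates over inspectable structures, and promotion into validated knowledge is gated by explicitly declared constraints rather than by a probabilistic score.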