This page enumerates the scientifically defensible claims associated with the Representation Model (RM) paradigm and its concrete instantiations at RCUBEAI. Each claim is explicitly linked to a validation phase, a scope of validity, and a set of non-claims. No claim on this page relies on speculation, extrapolation, or unpublished results.
On this page
Foundational claims
Core architectural assertions that define the RM paradigm.
Phase I claims
Structural competence validated in Phase I.
Phase II claims
Metacomputation and orchestration claims validated in Phase II.
Non-claims & status
What is explicitly not claimed, plus a status summary.
How to read this page
Each claim is presented in four parts: the claim itself, where it was validated, what is demonstrated, and what is not claimed. This structure is intentional. It ensures scientific clarity, regulatory readability, and protection against category errors.
Quick index
| Claim | Category | Validated in |
|---|---|---|
| RM-A1 — Representation-centric reasoning | Foundational | RM Foundation (architectural definition) + Phase I |
| RM-A2 — Hierarchical representational Orders | Foundational | RM Foundation + Phase I |
| RM-A3 — Non-closure as a first-class computational signal | Foundational | Phase 0 + Phase I |
| RM-P1 — Depth invariance | Phase I | Phase I — depth-invariance benchmark |
| RM-P2 — Competence decoupling from model scale | Phase I | Phase I — logical reasoning benchmarks |
| RM-P3 — Explicit handling of vagueness (Sorites) | Phase I | Phase 0 + Phase I (Sorites benchmarks) |
| RM-M1 — Metacomputational orchestration | Phase II | Phase II — metacomputational orchestration system |
| RM-M2 — Cost–safety frontier | Phase II | Phase II — security and consistency benchmarks |
| RM-M3 — Preservation of Phase I invariants under orchestration | Phase II | Phase II — metacomputational orchestration system |
Foundational architectural claims (RM paradigm)
These claims define the architectural commitments of the RM paradigm, independent of any single implementation.

Claim RM-A1 — Representation-centric reasoning
Claim
Reasoning can be architecturally decoupled from language-centric generation and performed over explicit internal representations.

Validated in
RM Foundation (architectural definition) + Phase I

What is demonstrated
- Explicit representational objects can be isolated from generative language models.
- Reasoning authority can be externalized from probabilistic token prediction.
What is not claimed
- Replacement of language models.
- Universal superiority over all language tasks.
Claim RM-A2 — Hierarchical representational Orders
Claim
Cognition and reasoning can be organized into autonomous representational Orders, each governed by its own validity and closure conditions.

Validated in
RM Foundation + Phase I

What is demonstrated
- Orders function as coherent computational spaces.
- Lower-Order competence constrains higher-Order reasoning.
What is not claimed
- Psychological realism of Orders.
- One-to-one mapping to human cognitive stages.
Claim RM-A3 — Non-closure as a first-class computational signal
Claim
Failure to achieve closure within a representational Order can be detected explicitly and used as a structural signal rather than masked by probabilistic smoothing.

Validated in
Phase 0 (early R3 prototype) + Phase I

What is demonstrated
- Contradictions, paradoxes, and boundary failures are explicitly surfaced.
- Systems can refuse to answer rather than hallucinate.
What is not claimed
- That all non-closure cases are resolvable.
- That refusal implies correctness of alternative answers.
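To make the non-closure principle concrete, here is a minimal, purely illustrative sketch (all names are hypothetical and not part of the RCUBEAI implementation): a reasoning step returns an explicit status object instead of being forced to produce an answer, so contradictions surface as a first-class signal.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    CLOSED = auto()        # the Order reached a valid conclusion
    NON_CLOSURE = auto()   # contradiction detected; no answer is forced


@dataclass
class Result:
    status: Status
    answer: object = None
    reason: str = ""


def evaluate(premises: set[str], query: str) -> Result:
    # Toy consistency check: a premise and its negation trigger non-closure.
    for p in premises:
        if ("not " + p) in premises:
            return Result(Status.NON_CLOSURE, reason=f"contradiction on '{p}'")
    return Result(Status.CLOSED, answer=query in premises)


# Contradictory premises yield an explicit signal, not a guessed answer.
r = evaluate({"wet", "not wet"}, "wet")
assert r.status is Status.NON_CLOSURE
```

The design point is the return type: a caller must inspect `status` before reading `answer`, which is what allows a system to refuse rather than hallucinate.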
Phase I claims — Structural competence
These claims are validated on Phase I evaluation artifacts, focusing on structural competence and invariants.

Claim RM-P1 — Depth invariance
Claim
Structural reasoning competence can be invariant to problem depth when represented explicitly.

Validated in
Phase I — depth-invariance benchmark

What is demonstrated
- 100% accuracy across increasing depth (D=4–30).
- Absence of the attention decay observed in probabilistic models.
What is not claimed
- Superiority on all deep reasoning tasks.
- General logical omnipotence.
Claim RM-P2 — Competence decoupling from model scale
Claim
Structural reasoning competence is not proportional to language-model parameter count.

Validated in
Phase I — logical reasoning benchmarks

What is demonstrated
- Small LLMs supervised by R3 outperform or match frontier models on 4/5 benchmarks.
- Order-of-magnitude cost reduction.
What is not claimed
- That small models are universally better.
- That scale is irrelevant for all tasks.
Claim RM-P3 — Explicit handling of vagueness (Sorites)
Claim
Vague predicates and Sorites-type paradoxes can be detected explicitly and handled through principled refusal or qualification.

Validated in
Phase 0 (early R3 prototype) + Phase I (Sorites benchmarks)

What is demonstrated
- Explicit structural classification of Sorites cases.
- Stable refusal of forced precision at boundaries.
What is not claimed
- Resolution of all vagueness in natural language.
- Philosophical completeness of the approach.
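The refusal-at-the-boundary behavior can be sketched in a few lines. This is a toy illustration only: the predicate, function name, and thresholds below are hypothetical and carry no empirical weight; the point is the explicit borderline band in which forced yes/no precision is refused.

```python
def classify_heap(grains: int, lower: int = 10, upper: int = 100) -> str:
    """Toy Sorites classifier for 'is this a heap?'.

    `lower` and `upper` are illustrative thresholds, not empirically
    grounded: below `lower` the predicate clearly fails, above `upper`
    it clearly holds, and in between the classifier refuses forced
    precision rather than guessing a binary answer.
    """
    if grains < lower:
        return "clearly-not-heap"
    if grains > upper:
        return "clearly-heap"
    return "borderline: refuse forced yes/no"
```

A stable classifier of this shape never flips its answer one grain at a time across the whole range, which is the structural property the Sorites benchmarks probe.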
Phase II claims — Metacomputation
These claims cover the orchestration layer (metacomputation) and its measurable effects on cost and safety.

Claim RM-M1 — Metacomputational orchestration
Claim
Reasoning resources can be allocated dynamically through metacomputation while preserving structural invariants.

Validated in
Phase II — metacomputational orchestration system

What is demonstrated
- Accept / refuse / escalate decisions occur prior to generation.
- No silent failure or bypass observed across > 1,200 test cases.
What is not claimed
- Optimal resource allocation in all domains.
- Independence from extractor quality.
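The accept / refuse / escalate pattern can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the `Decision` names mirror the claim text, but the cost model, field names, and thresholds are invented and do not describe the Phase II system.

```python
from enum import Enum


class Decision(Enum):
    ACCEPT = "accept"      # route to the cheap tier
    ESCALATE = "escalate"  # route to a stronger, costlier tier
    REFUSE = "refuse"      # fail closed: no generation at all


def orchestrate(task: dict, budget: float) -> Decision:
    """Decide BEFORE any generation call; thresholds are illustrative."""
    if task.get("non_closure"):              # structural failure already flagged
        return Decision.REFUSE
    est_cost = task.get("difficulty", 0.0) * 10.0  # toy cost estimate
    if est_cost <= budget:
        return Decision.ACCEPT
    if task.get("high_stakes"):
        return Decision.ESCALATE             # spend more when safety demands it
    return Decision.REFUSE                   # out of budget: fail closed
```

Note that the default path on any unhandled condition is `REFUSE`, which is what "no silent failure or bypass" requires of an orchestration gate.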
Claim RM-M2 — Cost–safety frontier
Claim
There exists a measurable trade-off between computational cost and safety that can be navigated architecturally rather than statistically.

Validated in
Phase II — security and consistency benchmarks

What is demonstrated
- Significant accuracy improvement on structural classification benchmarks.
- Substantial reduction in token usage versus frontier baselines.
What is not claimed
- Universal cost reduction.
- Improvement on tasks outside the structural classification scope.
Claim RM-M3 — Preservation of Phase I invariants under orchestration
Claim
Structural invariants validated in Phase I are preserved under multi-model orchestration.

Validated in
Phase II — metacomputational orchestration system

What is demonstrated
- Fail-closed behavior maintained.
- Paradox detection unaffected by tier escalation.
What is not claimed
- Completeness of domain invariant detection.
- Maturity of extractor-free reasoning.
Explicit non-claims
Status summary
| Claim category | Status |
|---|---|
| Architectural definition | Established |
| Structural competence (Phase I) | Validated |
| Metacomputation (Phase II) | Validated |
| Domain invariants via extraction | Open |
| Autonomous RM evolution | Research |
References
- Representation Models (RM): Explicit Representation, Controlled Reasoning, and beyond Language-Centric AI, Simone Mazzoni, RCUBEAI Research Lab, 2026.
This claims page is normative: it defines what RCUBEAI stands behind publicly, and what it does not.