This page enumerates the scientifically defensible claims associated with the Representation Model (RM) paradigm and its concrete instantiations at RCUBEAI. Each claim is explicitly linked to a validation phase, a scope of validity, and a set of non-claims. No claim on this page relies on speculation, extrapolation, or unpublished results.

How to read this page

Each claim on this page is presented in four parts:

1. Claim: the precise statement that is asserted.
2. Validated in: the experimental or validation phase supporting the claim.
3. What is demonstrated: what the evidence shows.
4. What is not claimed: explicit boundaries preventing over-interpretation.
This structure is intentional. It ensures scientific clarity, regulatory readability, and protection against category errors.

Quick index

| Claim | Category | Validated in |
| --- | --- | --- |
| RM-A1 — Representation-centric reasoning | Foundational | RM Foundation (architectural definition) + Phase I |
| RM-A2 — Hierarchical representational Orders | Foundational | RM Foundation + Phase I |
| RM-A3 — Non-closure as a first-class computational signal | Foundational | Phase 0 + Phase I |
| RM-P1 — Depth invariance | Phase I | Phase I — depth-invariance benchmark |
| RM-P2 — Competence decoupling from model scale | Phase I | Phase I — logical reasoning benchmarks |
| RM-P3 — Explicit handling of vagueness (Sorites) | Phase I | Phase 0 + Phase I (Sorites benchmarks) |
| RM-M1 — Metacomputational orchestration | Phase II | Phase II — metacomputational orchestration system |
| RM-M2 — Cost–safety frontier | Phase II | Phase II — security and consistency benchmarks |
| RM-M3 — Preservation of Phase I invariants under orchestration | Phase II | Phase II — metacomputational orchestration system |

Foundational architectural claims (RM paradigm)

These claims define the architectural commitments of the RM paradigm, independent of any single implementation.

Claim RM-A1 — Representation-centric reasoning

Claim

Reasoning can be architecturally decoupled from language-centric generation and performed over explicit internal representations.

Validated in

RM Foundation (architectural definition) + Phase I

What is demonstrated

  • Explicit representational objects can be isolated from generative language models.
  • Reasoning authority can be externalized from probabilistic token prediction.

What is not claimed

  • Replacement of language models.
  • Universal superiority over all language tasks.
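The decoupling asserted in RM-A1 can be pictured with a minimal sketch. All names and the toy comparison task below are hypothetical, not the RCUBEAI implementation: the point is only the shape of the pipeline, in which a generative model would at most map text into an explicit representation and verbalize the result, while the reasoning step itself is deterministic code over that representation rather than token prediction.

```python
# Illustrative sketch of representation-centric reasoning (hypothetical names).
def extract(text: str) -> dict:
    """Stand-in for an extractor: text -> explicit representation."""
    a, op, b = text.split()
    return {"lhs": int(a), "op": op, "rhs": int(b)}

def reason(rep: dict) -> bool:
    """Deterministic reasoning over the representation, not over tokens."""
    return rep["lhs"] < rep["rhs"] if rep["op"] == "<" else rep["lhs"] > rep["rhs"]

def verbalize(result: bool) -> str:
    """Language re-enters only after reasoning has concluded."""
    return "yes" if result else "no"

print(verbalize(reason(extract("3 < 7"))))  # yes
```

Here reasoning authority lives entirely in `reason`, which never sees the original text; that is the architectural commitment the claim names.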

Claim RM-A2 — Hierarchical representational Orders

Claim

Cognition and reasoning can be organized into autonomous representational Orders, each governed by its own validity and closure conditions.

Validated in

RM Foundation + Phase I

What is demonstrated

  • Orders function as coherent computational spaces.
  • Lower-Order competence constrains higher-Order reasoning.

What is not claimed

  • Psychological realism of Orders.
  • One-to-one mapping to human cognitive stages.

Claim RM-A3 — Non-closure as a first-class computational signal

Claim

Failure to achieve closure within a representational Order can be detected explicitly and used as a structural signal rather than masked by probabilistic smoothing.

Validated in

Phase 0 (early R3 prototype) + Phase I

What is demonstrated

  • Contradictions, paradoxes, and boundary failures are explicitly surfaced.
  • Systems can refuse to answer rather than hallucinate.

What is not claimed

  • That all non-closure cases are resolvable.
  • That refusal implies correctness of alternative answers.
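One way to picture non-closure as a first-class signal is a toy evaluator that returns a structured refusal, rather than an answer, whenever its explicit fact set is contradictory or incomplete. This is an illustrative sketch only; the types and checks below are hypothetical and stand in for, not reproduce, the R3 mechanism.

```python
# Illustrative sketch (hypothetical types): non-closure surfaces as a value,
# never as a smoothed-over guess.
from dataclasses import dataclass

@dataclass
class Closed:
    answer: bool

@dataclass
class NonClosure:
    reason: str  # the structural signal itself

def evaluate(facts: set[str], query: str):
    """Answer `query` only if the explicit fact set closes over it."""
    negated = "not " + query
    if query in facts and negated in facts:
        return NonClosure("contradiction: query and its negation both asserted")
    if query in facts:
        return Closed(True)
    if negated in facts:
        return Closed(False)
    return NonClosure("incompleteness: fact set does not decide the query")

print(evaluate({"wet", "not wet"}, "wet"))  # NonClosure, not a forced answer
print(evaluate({"wet"}, "wet"))             # Closed(answer=True)
```

Because the failure mode is a distinct return type, a caller must handle it explicitly; nothing downstream can mistake a refusal for an answer.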

Phase I claims — Structural competence

These claims are validated on Phase I evaluation artifacts, focusing on structural competence and invariants.

Claim RM-P1 — Depth invariance

Claim

Structural reasoning competence can be invariant to problem depth when represented explicitly.

Validated in

Phase I — depth-invariance benchmark

What is demonstrated

  • 100% accuracy across increasing depth (D=4–30).
  • Absence of the attention decay observed in probabilistic models.

What is not claimed

  • Superiority on all deep reasoning tasks.
  • General logical omnipotence.
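The depth-invariance claim can be illustrated with a toy chained-implication task: once the chain is held as an explicit relation graph, resolution is exact traversal, so accuracy cannot degrade as depth grows. The task and names below are hypothetical, chosen only to mirror the D=4–30 shape of the benchmark, not to reproduce it.

```python
# Illustrative sketch: "p0 implies p1, p1 implies p2, ..." held explicitly.
# Traversal is exact at any depth, so there is nothing to decay.
def entails(rules: dict[str, str], start: str, goal: str) -> bool:
    """Follow explicit implication links; chain depth is irrelevant."""
    seen: set[str] = set()
    node = start
    while node is not None and node not in seen:
        if node == goal:
            return True
        seen.add(node)
        node = rules.get(node)
    return False

# A depth-30 chain, matching the upper end of the benchmark range.
rules = {f"p{i}": f"p{i+1}" for i in range(30)}
print(entails(rules, "p0", "p30"))  # True, regardless of chain length
```

The contrast intended by the claim: a probabilistic model must carry the whole chain through attention, while explicit traversal touches one link at a time.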

Claim RM-P2 — Competence decoupling from model scale

Claim

Structural reasoning competence is not proportional to language-model parameter count.

Validated in

Phase I — logical reasoning benchmarks

What is demonstrated

  • Small LLMs supervised by R3 outperform or match frontier models on 4/5 benchmarks.
  • Order-of-magnitude cost reduction.

What is not claimed

  • That small models are universally better.
  • That scale is irrelevant for all tasks.

Claim RM-P3 — Explicit handling of vagueness (Sorites)

Claim

Vague predicates and Sorites-type paradoxes can be detected explicitly and handled through principled refusal or qualification.

Validated in

Phase 0 (early R3 prototype) + Phase I (Sorites benchmarks)

What is demonstrated

  • Explicit structural classification of Sorites cases.
  • Stable refusal of forced precision at boundaries.

What is not claimed

  • Resolution of all vagueness in natural language.
  • Philosophical completeness of the approach.
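A minimal sketch of principled refusal at vague boundaries (thresholds and names below are illustrative, not the benchmark's actual classification): the predicate exposes an explicit borderline zone and declines forced binary precision inside it, rather than committing to an arbitrary cutoff.

```python
# Illustrative sketch of Sorites handling (hypothetical thresholds).
from enum import Enum

class Verdict(Enum):
    TRUE = "true"
    FALSE = "false"
    BORDERLINE = "refuse forced precision"

def is_heap(grains: int, lower: int = 10, upper: int = 100) -> Verdict:
    """Classify a vague predicate with an explicit borderline zone."""
    if grains >= upper:
        return Verdict.TRUE
    if grains < lower:
        return Verdict.FALSE
    return Verdict.BORDERLINE  # surfaced, not smoothed into a guess

print(is_heap(5))    # Verdict.FALSE
print(is_heap(50))   # Verdict.BORDERLINE
print(is_heap(500))  # Verdict.TRUE
```

The Sorites step argument gains no purchase here: incrementing the grain count by one can move a case into `BORDERLINE`, but never silently flips a definite verdict.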

Phase II claims — Metacomputation

These claims cover the orchestration layer (metacomputation) and its measurable effects on cost and safety.

Claim RM-M1 — Metacomputational orchestration

Claim

Reasoning resources can be allocated dynamically through metacomputation while preserving structural invariants.

Validated in

Phase II — metacomputational orchestration system

What is demonstrated

  • Accept / refuse / escalate decisions occur prior to generation.
  • No silent failure or bypass observed across > 1,200 test cases.

What is not claimed

  • Optimal resource allocation in all domains.
  • Independence from extractor quality.
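The accept / refuse / escalate decision described above can be sketched as a pre-generation gate. Everything here is hypothetical (field names, thresholds, tier counts); the sketch shows only the control-flow shape: the decision is taken from structural checks alone, before any language model is invoked, and the fallback is always refusal.

```python
# Illustrative sketch of a pre-generation gate (hypothetical names/thresholds).
from dataclasses import dataclass

@dataclass
class Task:
    well_formed: bool  # did structural extraction reach closure?
    risk: float        # estimated safety risk in [0, 1]
    tier: int          # current model tier

def gate(task: Task, risk_budget: float = 0.3, max_tier: int = 2) -> str:
    if not task.well_formed:
        return "refuse"      # fail closed: no silent bypass
    if task.risk <= risk_budget:
        return "accept"      # current tier is sufficient
    if task.tier < max_tier:
        return "escalate"    # route to a stronger tier
    return "refuse"          # budget exhausted at the top tier

print(gate(Task(True, 0.1, 0)))   # accept
print(gate(Task(True, 0.8, 0)))   # escalate
print(gate(Task(True, 0.8, 2)))   # refuse
```

Note that every path returns an explicit decision; there is no branch on which generation proceeds without the gate having ruled, which is the "no silent failure or bypass" property the claim measures.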

Claim RM-M2 — Cost–safety frontier

Claim

There exists a measurable trade-off between computational cost and safety that can be navigated architecturally rather than statistically.

Validated in

Phase II — security and consistency benchmarks

What is demonstrated

  • Significant accuracy improvement on structural classification benchmarks.
  • Substantial reduction in token usage versus frontier baselines.

What is not claimed

  • Universal cost reduction.
  • Improvement on tasks outside the structural classification scope.

Claim RM-M3 — Preservation of Phase I invariants under orchestration

Claim

Structural invariants validated in Phase I are preserved under multi-model orchestration.

Validated in

Phase II — metacomputational orchestration system

What is demonstrated

  • Fail-closed behavior maintained.
  • Paradox detection unaffected by tier escalation.

What is not claimed

  • Completeness of domain invariant detection.
  • Maturity of extractor-free reasoning.
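The preservation property in RM-M3 is, in spirit, a universally quantified check over tiers. The toy below (hypothetical names, a stand-in paradox detector) sketches that shape: a Phase I invariant such as "paradoxical inputs are refused" is asserted at every orchestration tier, so escalation cannot erode it.

```python
# Illustrative property check (hypothetical): a Phase I invariant should
# hold identically at every orchestration tier.
def answer(query: str, tier: int) -> str:
    # Stand-in for a tiered pipeline; the structural check runs before any tier.
    if query == "this statement is false":  # toy paradox detector
        return "refuse"
    return f"answered at tier {tier}"

paradox = "this statement is false"
assert all(answer(paradox, tier) == "refuse" for tier in range(3))
print("invariant preserved across tiers")
```

Because the detection runs ahead of tier selection, escalating the tier changes only how accepted tasks are answered, never whether a paradox is refused.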

Explicit non-claims

To avoid ambiguity, the following are explicitly not claimed by the RM program at its current stage:
  • Artificial General Intelligence (AGI).
  • Human-level or super-human general reasoning.
  • Autonomous goal formation.
  • Universal task superiority.
  • Replacement of language models.

Status summary

| Claim category | Status |
| --- | --- |
| Architectural definition | Established |
| Structural competence (Phase I) | Validated |
| Metacomputation (Phase II) | Validated |
| Domain invariants via extraction | Open |
| Autonomous RM evolution | Research |

References

  • Representation Models (RM): Explicit Representation, Controlled Reasoning, and beyond Language-Centric AI, Simone Mazzoni, RCUBEAI Research Lab, 2026.
This claims page is normative: it defines what RCUBEAI stands behind publicly, and what it does not.