
Why Context Is Not the Solution

More tokens increase entropy, not reliability

Adding more tokens does not improve reliability. It increases entropy, destabilizes latent trajectories, and amplifies domain mismatch. This section formalizes why context-heavy prompting fails at scale, especially in agentic systems.

1. Context expands the input space, not the structure

LLMs encode the entire prompt into a latent state. More context means:

  • more competing patterns,
  • more unresolved domains,
  • more statistical ambiguity.

The model cannot "prioritize" the correct parts unless the representation is shaped before encoding.

Key point

Longer context increases uncertainty because it adds more degrees of freedom to the latent state.
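
A toy illustration of this point, using plain NumPy rather than a real model: treat the next-token distribution as a mixture of per-domain distributions and watch its entropy grow as more domains share one prompt. Vocabulary size, the number of domains, and the distributions themselves are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = 1000

def domain_dist():
    # Each "domain" concentrates probability on a small slice of the vocabulary.
    logits = rng.normal(size=vocab)
    logits[rng.choice(vocab, size=20, replace=False)] += 6.0
    p = np.exp(logits - logits.max())
    return p / p.sum()

def entropy_bits(p):
    return float(-(p * np.log2(p + 1e-12)).sum())

domains = [domain_dist() for _ in range(8)]
for k in range(1, 9):
    mixed = np.mean(domains[:k], axis=0)   # k unresolved domains competing in one prompt
    print(f"domains={k}  next-token entropy={entropy_bits(mixed):.2f} bits")
```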

2. Latent-state destabilization

Encoding is a projection:

sequence → embedding → latent state

This projection is neither linear nor domain-aware.

When context grows:

  • the manifold region widens,
  • the encoded state becomes less constrained,
  • navigation becomes more chaotic.

Instead of moving within a tight subspace, the model drifts through high-variance regions.

This is the opposite of reliability.
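
A minimal sketch of the widening, again with synthetic embeddings rather than a real encoder: tokens drawn around per-domain centers occupy a region whose total variance grows with every domain added to the prompt.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64
centers = rng.normal(scale=3.0, size=(6, dim))   # six distinct domains

def region_width(num_domains, tokens_per_domain=32):
    # Tokens cluster around their domain's center; the prompt's embeddings
    # jointly occupy a region whose total variance we measure.
    tokens = np.concatenate([
        c + rng.normal(scale=0.5, size=(tokens_per_domain, dim))
        for c in centers[:num_domains]
    ])
    return float(np.trace(np.cov(tokens, rowvar=False)))

for k in (1, 2, 4, 6):
    print(f"domains={k}  occupied-region variance={region_width(k):.1f}")
```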

3. Attention amplifies the wrong signals

Attention does not select "relevant meaning." It selects patterns:

  • repetitive structures,
  • frequent motifs,
  • statistical correlations,
  • dominant syntactic rhythms.

With more context:

  • spurious patterns become dominant nodes,
  • structural noise overpowers the intended domain,
  • the model latches onto whichever pattern is statistically strongest.

Result

The operator amplifies noise.
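
A toy single-head attention example makes this concrete. The query, the one relevant key, and the repeated boilerplate keys are all synthetic; the only point is that repetition alone shifts attention mass away from the relevant key.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 32
query    = rng.normal(size=dim)
relevant = query + rng.normal(scale=0.3, size=dim)   # key genuinely aligned with the query
boiler   = rng.normal(size=dim)                      # generic boilerplate pattern

def attention_on_relevant(n_repeats):
    # One relevant key plus n near-duplicate boilerplate keys.
    keys = np.vstack([relevant] + [boiler + rng.normal(scale=0.05, size=dim)
                                   for _ in range(n_repeats)])
    scores = keys @ query / np.sqrt(dim)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights[0]                # share of attention left on the relevant key

for n in (1, 4, 16, 64, 256):
    print(f"boilerplate repeats={n:4d}  attention on relevant key={attention_on_relevant(n):.3f}")
```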

4. Mixed-domain context collapses the representation

Most prompts contain:

  • instructions,
  • examples,
  • partial traces,
  • system messages,
  • unrelated history.

Mixing domains forces the model to embed everything into one compressed vector.

This causes:

  • cross-domain interference,
  • incorrect basin selection,
  • unpredictable attractor drift.

The model can only follow one trajectory. If domains conflict, the system defaults to the nearest attractor — not the correct interpretation.
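
A small sketch of basin selection under assumed geometry: the prompt is pooled into one vector, and the "interpretation" is whichever domain centroid it lands nearest to. As the off-domain fraction grows, the margin collapses and the basin eventually flips.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 48
intended, offdomain = rng.normal(size=(2, dim)) * 3.0   # two attractor centroids

def nearest_basin(off_fraction, n_tokens=200):
    n_off = int(off_fraction * n_tokens)
    tokens = np.vstack([
        intended  + rng.normal(scale=0.5, size=(n_tokens - n_off, dim)),
        offdomain + rng.normal(scale=0.5, size=(n_off, dim)),
    ])
    pooled = tokens.mean(axis=0)     # one compressed vector for the whole prompt
    d_int = np.linalg.norm(pooled - intended)
    d_off = np.linalg.norm(pooled - offdomain)
    return ("intended" if d_int < d_off else "off-domain"), d_off - d_int

for f in (0.0, 0.25, 0.45, 0.55, 0.75):
    basin, margin = nearest_basin(f)
    print(f"off-domain fraction={f:.2f}  basin={basin:<10s}  margin={margin:+.2f}")
```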

5. Why adding more context increases hallucination

Hallucination occurs when the model falls into an attractor that does not match the intended domain.

Long context makes this more likely because:

  • the manifold becomes larger,
  • attractor boundaries blur,
  • the model takes a trajectory based on statistical dominance instead of structural correctness.

More tokens → more drift → more attractor collisions → more hallucination.
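
A rough Monte Carlo sketch of the collision effect, with synthetic centroids standing in for attractors: holding the perturbation fixed, adding more attractors to the same space lowers the chance that a noisy state still lands in the intended basin.

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 32
centroids = rng.normal(scale=2.0, size=(16, dim))   # candidate attractors; index 0 is intended

def intended_basin_rate(n_attractors, noise=4.0, trials=2000):
    hits = 0
    for _ in range(trials):
        state = centroids[0] + rng.normal(scale=noise, size=dim)   # perturbed latent state
        dists = np.linalg.norm(centroids[:n_attractors] - state, axis=1)
        hits += int(dists.argmin() == 0)
    return hits / trials

for k in (2, 4, 8, 16):
    print(f"attractors={k:2d}  intended-basin rate={intended_basin_rate(k):.2f}")
```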

6. Why context hinders planning

Planning requires:

  • stable intermediate states,
  • predictable transitions,
  • constrained latent regions.

Large context violates all three.

The model must re-encode the entire sequence every step, injecting noise back into the latent state. Planning collapses under its own history.

This is why agent loops with long context devolve into:

  • repetitive text,
  • contradictory steps,
  • stuck loops,
  • unstable chains of thought.
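
The contrast in loop shape can be sketched directly. The function names below are stand-ins, not any particular framework's API; the point is only that one loop's input grows without bound while the other stays fixed in size.

```python
def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a dummy string."""
    return f"<output for {len(prompt)}-char input>"

def contract_state(output: str, state: dict) -> dict:
    """Stand-in for extracting only the fields the next step needs."""
    return {"task": state["task"], "last_result": output, "step": state.get("step", 0) + 1}

# (a) Context-accumulating loop: every step re-encodes the whole history,
#     so the input, and the noise injected with it, grows without bound.
history = "task: reconcile invoices"
for i in range(3):
    history += "\n" + call_model(history)
    print(f"accumulating loop step {i}: prompt length = {len(history)}")

# (b) Contracted-state loop: each step sees only a small canonical state,
#     so the region the model must navigate stays the same size every step.
state = {"task": "reconcile invoices"}
for i in range(3):
    state = contract_state(call_model(str(state)), state)
    print(f"contracted loop   step {i}: state length  = {len(str(state))}")
```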

7. Reliability = domain contraction, not expansion

The fix is not more context.
It is fewer, more precise representations.

This requires:

  • domain extraction,
  • canonical rewriting,
  • narrow schemas,
  • stable domain boundaries.

Reducing context contracts the manifold region that the model must navigate.

This is what makes trajectories stable.
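
A minimal sketch of what contraction can look like in code. The schema fields and the regex-based extractor are illustrative assumptions, not a prescribed pipeline; the idea is that only the canonical representation ever reaches the model.

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class RefundRequest:                 # narrow schema: only what the operator needs
    order_id: str
    amount_cents: int
    reason: str

def canonicalize(raw_prompt: str) -> RefundRequest:
    """Contract a messy prompt into one canonical domain representation."""
    order  = re.search(r"order\s+#?(\w+)", raw_prompt, re.I)
    amount = re.search(r"\$(\d+)(?:\.(\d{2}))?", raw_prompt)
    if not (order and amount):
        raise ValueError("prompt does not contain a refund request")
    cents = int(amount.group(1)) * 100 + int(amount.group(2) or 0)
    return RefundRequest(order_id=order.group(1), amount_cents=cents,
                         reason="customer request")      # placeholder reason

print(canonicalize("Hi! Please refund $12.50 on order #A183, thanks. "
                   "Also, unrelated: what's the weather like?"))
```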

8. Connection to 0-Context Architecture

0-context isolates the domain slice before calling the operator.
This eliminates:

  • cross-domain interference,
  • latent drift,
  • attractor ambiguity.

The model moves in a predictable region, producing:

  • reproducible outputs,
  • verifiable transitions,
  • stable control loops.

This is the only reliable foundation for agentic systems.
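
A minimal sketch of that call path, under assumed names: the operator sees only the extracted domain slice, and its output is checked against the same narrow schema before the control loop accepts the transition.

```python
import json

OUTPUT_SCHEMA = {"order_id": str, "amount_cents": int, "approve": bool}

def call_operator(domain_slice: dict) -> str:
    """Stand-in for the LLM call; the slice is the only thing the model sees."""
    return json.dumps({**domain_slice, "approve": domain_slice["amount_cents"] < 5000})

def verify_transition(output: str) -> dict:
    result = json.loads(output)
    assert set(result) == set(OUTPUT_SCHEMA), "unexpected fields"
    assert all(isinstance(result[k], t) for k, t in OUTPUT_SCHEMA.items()), "type drift"
    return result            # verified transition; safe for the control loop to commit

domain_slice = {"order_id": "A183", "amount_cents": 1250}
print(verify_transition(call_operator(domain_slice)))
```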

9. Research directions

Possible extensions of this work:

  • quantifying entropy expansion from additional tokens,
  • mapping attractor transitions under mixed-domain prompts,
  • designing canonicalization pipelines for enterprise workflows,
  • modeling planning stability under shrinking vs expanding context windows.

These form the empirical backbone for verifying agentic architectures built on domain isolation.
