to2d

A space for ideas, notes, and ongoing work.

Transforming the Problem

How domain changes reveal solutions

Most hard problems don't become solvable by adding more information or more computation. They become solvable when the domain of the problem is changed. This was an early theme in my work long before modern AI systems existed: the realization that every problem has multiple possible domains, and that the structure of the domain determines whether the problem is tractable, stable, or even meaningful.

Why the domain matters more than the method

The same problem can look impossible in one representation and trivial in another. Control systems taught this early:

  • a dynamic system in the time domain is a tangle of coupled differential equations,
  • the same system, once linearized, becomes algebraic in the Laplace domain,
  • the same system in state-space becomes modular and composable.

Nothing about the world changed — only the domain did.
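The contrast can be sketched numerically. Assuming an illustrative linear plant x'' + 3x' + 2x = u (the system and gains below are made up for the example, not from the original text): in the time domain, answering "where does this settle?" means integrating the equation step by step; in the Laplace domain, the same question is a quadratic.

```python
import math

# Time domain: getting the step response means numerically
# integrating x'' + 3x' + 2x = u (forward Euler here for brevity).
x, v, dt = 0.0, 0.0, 1e-4
for _ in range(100_000):                 # simulate t = 0 .. 10 s
    a = 1.0 - 3.0 * v - 2.0 * x          # step input u = 1
    x, v = x + dt * v, v + dt * a

# Laplace domain: the same ODE becomes algebraic,
#   (s^2 + 3s + 2) X(s) = U(s),
# so stability and steady state drop out of a quadratic.
b, c = 3.0, 2.0
disc = math.sqrt(b * b - 4.0 * c)
poles = sorted([(-b - disc) / 2.0, (-b + disc) / 2.0])   # [-2.0, -1.0]
dc_gain = 1.0 / c                                        # final value for u = 1

print(round(x, 4), poles, dc_gain)   # the simulation agrees with the algebra
```

The simulated trajectory converges to exactly the value the algebraic form predicts, without any integration.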

This insight transferred cleanly into software automation years before LLMs: if a system behaves unpredictably, it's usually because the task is being represented in the wrong domain, not because the system is "complex."

Early experiments in domain inversion

This phase of work looked like:

  • rewriting physical problems into alternate coordinate frames,
  • decomposing tasks into representations that exposed underlying structure,
  • converting noisy sensor data into canonical domains,
  • rewriting intent into machine-solvable domains,
  • mapping environmental states into stable representations.

The pattern was always the same:

Change the domain → reveal the solution.

Sometimes this meant transforming a dynamic system into a frequency domain. Other times it meant collapsing a messy high-dimensional input into a simpler canonical form. The common thread: once the domain was right, complexity reduced naturally.

Domain transforms as tools for clarity

A domain transform is any operation that:

  • removes irrelevant degrees of freedom,
  • normalizes structure,
  • aligns the representation with a solvable manifold,
  • reduces entropy,
  • keeps only the variables that matter.

Examples from early research:

1. Coordinate transforms

Switching between world-space, body-space, or task-space representations.
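As a minimal sketch of this first kind of transform (the planar pose and target coordinates are illustrative, not from the original text): a target that looks arbitrary in world-space becomes "straight ahead, zero lateral error" in body-space.

```python
import math

def world_to_body(target_xy, pose_xy, heading):
    """Re-express a world-space point in the robot's body frame.

    pose_xy / heading describe the robot in world-space; the result
    is the same point measured along the robot's own axes.
    """
    dx = target_xy[0] - pose_xy[0]
    dy = target_xy[1] - pose_xy[1]
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    # Rotate the offset by -heading (inverse of the robot's rotation).
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)

# Robot at (1, 1) facing 45 degrees; target at (2, 2) lies dead ahead.
bx, by = world_to_body((2.0, 2.0), (1.0, 1.0), math.radians(45))
print(round(bx, 6), round(by, 6))   # forward distance, zero lateral error
```

In world coordinates the controller would juggle two coupled errors; in body coordinates "drive forward" is the whole answer.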

2. Signal-domain rewrites

Turning noisy time-series into structured frequency components.
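A sketch of that rewrite, assuming a made-up signal (a 5 Hz tone buried in noise): the tone is invisible sample by sample, but one FFT later it is the single dominant bin.

```python
import numpy as np

# Assumed signal: a 5 Hz tone plus heavy noise (illustrative only).
rng = np.random.default_rng(0)
fs = 200.0                                   # sample rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)              # 2 seconds of samples
signal = np.sin(2 * np.pi * 5.0 * t) + 0.8 * rng.standard_normal(t.size)

# Time domain: noise dominates. Frequency domain: one clear peak.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

print(dominant)   # the buried 5 Hz component
```

Nothing was filtered or denoised; the structure was always there, and the frequency domain is simply where it is easy to see.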

3. System inversion

Expressing behavior in terms of error dynamics instead of absolute state.
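A sketch of that inversion, assuming a toy first-order plant x' = u and a proportional gain (all numbers invented for illustration): once the loop is written in terms of the error e = x_ref - x, the closed loop is e' = -k·e, and every setpoint produces the same exponential decay.

```python
# Sketch: control expressed in error coordinates instead of absolute state.
def simulate(x0, x_ref, k=2.0, dt=0.01, steps=500):
    """Proportional control on the error e = x_ref - x.

    In error coordinates the closed loop is e' = -k e: identical
    decay dynamics no matter which setpoint was requested.
    """
    x = x0
    for _ in range(steps):
        e = x_ref - x          # the domain shift: reason about e, not x
        u = k * e
        x += dt * u            # plant: x' = u (forward Euler step)
    return x_ref - x           # remaining error

# Different initial states and setpoints, identical error behavior.
print(simulate(0.0, 1.0), simulate(3.0, -2.0))
```

The absolute trajectories look nothing alike; the error trajectories are the same curve scaled, which is why the error domain is the one worth designing in.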

4. Behavior-space mapping

Mapping desired outcomes into constraint sets instead of raw instructions.
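A toy sketch of outcome-as-constraints (every field name and predicate here is hypothetical): instead of scripting a sequence of raw steps, describe the acceptable end state as a constraint set and check any final state against it.

```python
# Sketch: a desired outcome as a constraint set rather than a script.
# Field names and predicates are hypothetical, for illustration only.
constraints = {
    "status": lambda v: v == "submitted",
    "total": lambda v: 0 < v <= 100,
    "email": lambda v: "@" in v,
}

def satisfies(state, constraints):
    """True if every constraint holds; missing keys count as failures."""
    return all(k in state and check(state[k])
               for k, check in constraints.items())

# Any execution path that reaches this state is acceptable.
final_state = {"status": "submitted", "total": 42, "email": "a@b.co"}
print(satisfies(final_state, constraints))                       # True
print(satisfies({"status": "draft", "total": 42}, constraints))  # False
```

The instruction sequence becomes an implementation detail; the constraint set is the durable, checkable representation of intent.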

These weren't academic tricks — they were survival tools. They made the problems solvable.

Representation as the real mechanism of control

What became clear over time:

Most algorithms don't solve problems directly.
They solve representations of those problems.

If the representation is wrong:

  • the algorithm fails,
  • the system drifts,
  • the outcome becomes unstable.

If the representation is correct:

  • simpler methods work,
  • stability emerges,
  • solutions are transparent.

This is the seed of the later frameworks:

  • domain extraction,
  • zero-context architecture,
  • operator composition,
  • verifiable agent systems.

They all come from this root idea:
A good representation is a control mechanism.

Domain rewriting in early automation prototypes

Years before LLMs, automation already required domain transforms:

  • converting browser UI into actionable DOM slices,
  • turning documents into canonical sections,
  • rewriting user requests into structured workflows,
  • mapping state machines into dependency graphs.
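The first of those transforms can be sketched with nothing but the standard-library HTML parser (a real system would use a full DOM library; the page fragment and tag whitelist below are invented for the example): the raw page collapses to a slice of only the elements an automation step could act on.

```python
from html.parser import HTMLParser

# Assumed whitelist of tags an automation step can act on.
ACTIONABLE = {"a", "button", "input", "select", "textarea"}

class ActionableSlicer(HTMLParser):
    """Collect only the actionable elements, discarding layout noise."""
    def __init__(self):
        super().__init__()
        self.slices = []

    def handle_starttag(self, tag, attrs):
        if tag in ACTIONABLE:
            self.slices.append((tag, dict(attrs)))

# Hypothetical page fragment, for illustration only.
page = """
<div class="hero"><p>Welcome!</p>
  <a href="/login">Sign in</a>
  <input name="q" type="text">
  <button id="go">Search</button>
</div>
"""
slicer = ActionableSlicer()
slicer.feed(page)
print(slicer.slices)   # three actionable elements, everything else stripped
```

The wrapper divs and copy text never reach the planner; the domain it reasons over is the slice, not the page.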

I didn't have the language for it then, but the underlying principle was the same as in physics and control systems:
rewrite the domain until the problem is solvable.

This mindset eventually became the backbone of modern automation architecture: To2D (Transfer of Two Domains) and domain intelligence.

Why transforming the problem matters today

All the later work — zero-context, state-space operators, correction loops, multi-step graph composition — sits on top of this foundation.

Because once you understand how to transform the problem:

  • complexity becomes optional,
  • stability becomes designed instead of hoped for,
  • automation becomes verifiable,
  • AI becomes a component, not a belief system.

The techniques evolved, but the core idea has remained unchanged:

Choose the right domain, and the problem rearranges itself.
