Before the current generation of AI systems, the challenge was simple to state but fundamental: how do you map intent to behavior without relying on natural language as the interface? This work predated LLMs entirely, and it shaped a different kind of architecture: one where intent was represented structurally, not conversationally.
That architecture became a precursor to modern operator systems, verifiable agents, and structured planning.
Intent is not language
Most systems treat language as the source of intent.
But language is ambiguous, lossy, and overloaded with context.
Early experiments made something clear:
Intent is a state transition, not a sentence.
A person wants:
- a position reached,
- a document processed,
- a workflow completed,
- a system updated,
- a condition satisfied.
The linguistic form is incidental.
This realization led to representing intent as structure, not text.
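A minimal sketch of the idea in Python. The `TargetState` class and its fields are hypothetical, invented for this illustration; the point is that the intent lives in the structure, and the sentence above it is disposable:

```python
from dataclasses import dataclass

# Hypothetical illustration: the same intent, once as text, once as structure.
utterance = "move a bit to the left and stop near the wall"  # ambiguous, lossy

@dataclass(frozen=True)
class TargetState:
    """Intent as a desired final state, not a sentence."""
    position_x: float   # where the system should end up
    max_speed: float    # constraint on how it gets there

    def satisfied(self, current_x: float, tolerance: float = 0.01) -> bool:
        # Intent is met when the observed state matches the target state.
        return abs(current_x - self.position_x) <= tolerance

intent = TargetState(position_x=-0.25, max_speed=0.1)
print(intent.satisfied(current_x=-0.249))  # True: the transition completed
```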
How intent was represented before LLMs
Without natural-language models, intent had to take alternative forms, sketched in code after the list:
1. Desired final state
Representing the goal as a target state vector:
- position = x
- workflow node = completed
- field = extracted
- jurisdiction rule = applied
2. Constraint sets
Intent defined as "all states satisfying these inequalities."
3. Behavior templates
Mapping signals or user choices into predefined, safe action primitives.
4. Fuzzy inference rules
Where membership functions encoded degrees of intent:
- "move left a little"
- "stabilize here"
- "slow down near boundary"
5. Graph positions
Intent expressed as the next valid node in a workflow DAG.
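Forms 1, 2, and 5 fit in a few lines of Python. Everything below (state keys, constraints, node names) is hypothetical, chosen only to make the encodings concrete:

```python
# Illustrative encodings of forms 1, 2, and 5 above; all names are hypothetical.

# 1. Desired final state: the goal as a target state vector.
target_state = {"workflow_node": "completed", "field": "extracted"}

def reached(state: dict) -> bool:
    return all(state.get(k) == v for k, v in target_state.items())

# 2. Constraint set: intent as "all states satisfying these inequalities".
constraints = [
    lambda s: s["speed"] <= 1.0,      # slow down near boundary
    lambda s: 0.0 <= s["x"] <= 10.0,  # stay inside the workspace
]

def admissible(state: dict) -> bool:
    return all(check(state) for check in constraints)

# 5. Graph position: intent as the next valid node in a workflow DAG.
dag = {"ingest": ["validate"], "validate": ["extract"], "extract": []}

def next_nodes(current: str) -> list[str]:
    return dag[current]

print(reached({"workflow_node": "completed", "field": "extracted"}))  # True
print(admissible({"speed": 0.5, "x": 3.0}))                           # True
print(next_nodes("validate"))                                         # ['extract']
```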
The common theme: intent is never free-form.
It is encoded into a domain the system can execute.
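Form 4 is the least obvious encoding, so it gets its own sketch: a fuzzy membership function turns a vague quantifier into a degree of intent over an executable domain. The triangular function and its breakpoints are hypothetical:

```python
# A minimal fuzzy-membership sketch for form 4; all values are hypothetical.
# "Move left a little" becomes a degree of membership, not a parsed sentence.

def a_little(distance: float) -> float:
    """Triangular membership: peaks at 0.2 units, fades to 0 at 0.0 and 0.4."""
    if distance <= 0.0 or distance >= 0.4:
        return 0.0
    return 1.0 - abs(distance - 0.2) / 0.2

# Degrees of intent over candidate moves; pick the best-matching action.
candidates = [0.05, 0.2, 0.35, 0.6]
best = max(candidates, key=a_little)
print(best)  # 0.2: the move that best matches "a little"
```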
Early intent engines were proto-operators
Years before language models, systems were already using:
- PID controllers,
- fuzzy controllers,
- state-transition tables,
- rule-based planners,
- action graphs.
These engines took intent in a structured form and produced deterministic actions.
No speculation.
No free-form reasoning.
No hidden context.
Just structure → operator → behavior.
This is precisely the architecture used in verifiable agents today.
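A state-transition table makes the point concrete. A minimal sketch, with hypothetical states, intents, and actions; the table is the entire decision surface, so behavior is deterministic by construction:

```python
# A minimal proto-operator: a state-transition table mapping
# (state, structured intent) pairs to deterministic actions.
# States, intents, and actions here are hypothetical.

TRANSITIONS = {
    ("idle",    "start_job"):  ("running", "spawn_worker"),
    ("running", "pause_job"):  ("paused",  "suspend_worker"),
    ("paused",  "start_job"):  ("running", "resume_worker"),
    ("running", "finish_job"): ("done",    "collect_output"),
}

def step(state: str, intent: str) -> tuple[str, str]:
    # No speculation: an unknown pair is rejected, never guessed at.
    if (state, intent) not in TRANSITIONS:
        raise ValueError(f"no transition for {(state, intent)}")
    return TRANSITIONS[(state, intent)]

state = "idle"
for intent in ["start_job", "pause_job", "start_job", "finish_job"]:
    state, action = step(state, intent)
    print(state, action)
```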
Why intent needed rewriting before execution
Mapping intent directly to actions is unstable; the same intent calls for different actions depending on context.
Intent must first be rewritten into a machine-solvable domain:
- browser → actionable DOM slice
- document → target field schema
- workflow → dependency graph node
- control system → error signal
- payroll → jurisdictional rule set
- compliance → constraint template
This rewrite is not optional — it is the core of To2D (Transfer of Two Domains):
Human-visible intent → machine-solvable representation.
Once rewritten, the operator can act predictably.
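A hedged sketch of the rewrite step, assuming hypothetical domains and payload fields; this illustrates the To2D idea, not an actual To2D API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MachineIntent:
    domain: str    # which solvable domain the intent was rewritten into
    payload: dict  # the canonical representation an operator can execute

def rewrite(human_intent: str) -> MachineIntent:
    # In a real system this dispatch would be driven by task context;
    # string matching keeps the sketch short. All payloads are hypothetical.
    if human_intent == "fill in the employee's tax rate":
        return MachineIntent("payroll", {"rule_set": "jurisdiction:CA",
                                         "field": "tax_rate"})
    if human_intent == "click the submit button":
        return MachineIntent("browser", {"dom_slice": "form#checkout",
                                         "action": "click",
                                         "target": "button[type=submit]"})
    raise ValueError("intent could not be rewritten into a known domain")

intent = rewrite("click the submit button")
print(intent.domain, intent.payload["action"])  # browser click
```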
How this led to modern planning architectures
The early work made several structural ideas obvious:
1. Planning should be structural, not linguistic
A plan is a graph, not a paragraph (see the sketch after this list).
2. Intent resolution is a domain extraction problem
Identify the domain before selecting an action.
3. Operators should respond to canonical representations
Not to conversational ambiguity.
4. Deterministic transitions beat inferred sequences
Reliability > creativity in automation.
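The first and fourth ideas combine in a few lines: a sketch using Python's standard-library graphlib, with hypothetical task names. The execution order is derived from the graph, never inferred:

```python
from graphlib import TopologicalSorter

# Each key depends on the tasks in its set; the structure IS the plan.
plan = {
    "fetch_page":    set(),
    "extract_table": {"fetch_page"},
    "validate_rows": {"extract_table"},
    "write_report":  {"validate_rows"},
}

# Deterministic transitions: topological order replaces guessed sequences.
for task in TopologicalSorter(plan).static_order():
    print("run:", task)
```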
These ideas re-emerged later in:
- deterministic planning via structural constraints,
- operator composition pipelines,
- closed-loop verification architectures.
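Closed-loop verification reduces to one pattern: act, then check the target state before proceeding. A minimal sketch, with a hypothetical operator and verifier:

```python
# Every operator call is followed by a check against the target state,
# and failure is surfaced, not papered over. Names below are hypothetical.

def run_verified(operator, verify, state: dict, retries: int = 2) -> dict:
    for _ in range(retries + 1):
        state = operator(state)
        if verify(state):  # did we actually reach the target state?
            return state
    raise RuntimeError("operator failed verification; refusing to continue")

# Toy operator marks a field as extracted; verifier checks the postcondition.
result = run_verified(
    operator=lambda s: {**s, "field": "extracted"},
    verify=lambda s: s.get("field") == "extracted",
    state={"field": None},
)
print(result)  # {'field': 'extracted'}
```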
Why intent-without-language matters today
Even with LLMs, the principle still holds:
- natural language is too ambiguous for deterministic automation,
- free-form intent creates drift,
- conversational planning is unstable,
- multi-step tasks collapse without structure.
Structured intent is still the only reliable substrate for:
- browser automation,
- HR and payroll workflows,
- compliance reasoning,
- document pipelines,
- multi-agent systems.
LLMs improved the operators — not the fundamental nature of intent.
The principle in one sentence
Intent is not a phrase.
Intent is a target state.