
Closed-Loop Verification Architecture

Proposals meet guarantees

LLMs produce proposals, not guarantees. Verifiers produce guarantees, not proposals.
A reliable agentic system emerges only when the two are fused into a closed-loop architecture: every operator step is followed by a deterministic verification step that stabilizes the system before any state transition occurs.

This document formalizes that architecture and includes real examples that reveal why this pattern is necessary for enterprise automation, browser workflows, and document-heavy systems.

1. Why LLM-driven systems need closed loops

LLMs are nonlinear operators with:

  • unpredictable manifold drift,
  • domain-sensitive outputs,
  • attractor bias,
  • no inherent error boundaries.

Open-loop use (just "call the model and hope") leads to:

  • compounding errors,
  • unstable multi-step plans,
  • hallucination propagation,
  • runaway loops in browser agents.

Closed-loop verification stops the instability.

2. The closed-loop structure

Every agent step follows the same cycle:

1. Prepare:      Project state → canonical slice
2. Propose:      LLM/operator produces uₜ
3. Verify:       Deterministic validator produces yₜ
4. Integrate:    Update state with verified transition
5. Feedback:     State feeds next cycle

Formally:

zₜ   = Pₛ(xₜ)
uₜ   = f(zₜ)
yₜ   = V(uₜ)
xₜ₊₁ = I(xₜ, yₜ)

The system is stable only when V enforces strict domain boundaries.
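The cycle above can be sketched in a few lines. This is a minimal illustration, not a reference implementation: the projection, operator, verifier, and integrator bodies are deterministic stand-ins I have invented for clarity, and a real operator would be an LLM call.

```python
# Minimal sketch of the closed loop: z = P_s(x), u = f(z), y = V(u), x' = I(x, y).
# All four function bodies are illustrative stand-ins.

def project(state):           # P_s: state -> canonical slice
    return {"fields": state.get("fields", {})}

def operator(z):              # f: LLM stand-in (deterministic stub here)
    return {"action": "set", "fields": z["fields"]}

def verify(u):                # V: deterministic structural check
    if not isinstance(u.get("fields"), dict):
        return {"status": "invalid"}
    return {"status": "valid", "payload": u}

def integrate(state, y):      # I: only validated transitions enter the state
    if y["status"] != "valid":
        return state          # rejected proposal leaves the state unchanged
    new_state = dict(state)
    new_state["fields"] = y["payload"]["fields"]
    new_state["step"] = state.get("step", 0) + 1
    return new_state

def step(state):
    z = project(state)
    u = operator(z)
    y = verify(u)
    return integrate(state, y)

state = step({"fields": {"name": "Ada"}})
```

The key design point is in integrate: an invalid proposal returns the old state untouched, so errors cannot enter the trajectory.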

3. What the verifier actually does

A verifier is not an LLM.
It is a deterministic function that enforces:

  • structure,
  • preconditions,
  • invariants,
  • domain constraints,
  • failure semantics.

Verifier Outputs

  • Valid: output passes all checks → accepted
  • Invalid: output violates structure → rejected
  • Corrected: output is fixed using deterministic rules
  • Escalated: output is impossible → trigger fallback or human review
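These four outcomes map naturally onto a small result type. The sketch below assumes a hypothetical numeric "amount" field to show all four branches; the field name and correction rule are my own illustrative choices.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class VerifierResult:
    status: str                     # "valid" | "invalid" | "corrected" | "escalated"
    payload: Optional[Any] = None
    reason: Optional[str] = None

def verify_amount(u: dict) -> VerifierResult:
    value = u.get("amount")
    if isinstance(value, (int, float)) and not isinstance(value, bool) and value >= 0:
        return VerifierResult("valid", u)
    if isinstance(value, str):
        try:                        # deterministic correction rule: parse numeric strings
            num = float(value)
            if num >= 0:
                return VerifierResult("corrected", dict(u, amount=num))
        except ValueError:
            pass
    if value is None:               # impossible to repair locally -> escalate
        return VerifierResult("escalated", reason="amount missing; needs fallback")
    return VerifierResult("invalid", reason="amount violates constraints")
```

Note that "corrected" only covers repairs that are fully deterministic; anything requiring judgment is escalated, never guessed.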

4. Real examples

Example 1: Browser actions

Operator:

actions = f(DOM_slice)

Verifier:

  • does each action target a valid selector?
  • does the sequence avoid unsafe patterns?
  • does each step match the page's actual DOM state?

Fake or incorrect selectors are caught before execution.
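A selector check of this kind is a few lines of deterministic code. The action shape, the unsafe-operation list, and the DOM snapshot below are illustrative assumptions, not a real browser API.

```python
# Validate proposed browser actions against a snapshot of known selectors.
UNSAFE = {"eval", "submit_payment"}      # example unsafe-pattern list

def verify_actions(actions, dom_selectors):
    errors = []
    for i, a in enumerate(actions):
        if a["selector"] not in dom_selectors:
            errors.append((i, "unknown selector"))
        if a["op"] in UNSAFE:
            errors.append((i, "unsafe operation"))
    return errors                        # empty list -> accepted

dom = {"#login", "#user", "#pass"}
proposed = [
    {"op": "type", "selector": "#user"},
    {"op": "click", "selector": "#sbmit"},   # hallucinated selector
]
errors = verify_actions(proposed, dom)
```

Because the check runs against the page's actual state, the hallucinated selector is rejected before any action executes.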

Example 2: Document extraction

Operator:

fields = f(document_section)

Verifier:

  • are required fields present?
  • do types match schema?
  • do values obey format constraints?

If the operator invents a value (hallucination), verification blocks it instead of letting it propagate.
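A minimal field-level check might look like this. The schema (field names, types, required flags) is invented for the example; a production system would use a real schema language such as JSON Schema.

```python
SCHEMA = {  # illustrative schema: field -> (expected type, required?)
    "invoice_id": (str, True),
    "total":      (float, True),
    "currency":   (str, False),
}

def verify_fields(fields):
    problems = []
    for name, (typ, required) in SCHEMA.items():
        if name not in fields:
            if required:
                problems.append(f"missing required field: {name}")
            continue
        if not isinstance(fields[name], typ):
            problems.append(f"type mismatch: {name}")
    return problems

extracted = {"invoice_id": "INV-7", "total": "99.50"}  # total came back as text
problems = verify_fields(extracted)
```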

Example 3: Compliance workflow

Operator:

steps = f(employee_data)

Verifier:

  • cross-check against statutory rules
  • check ordering constraints
  • detect missing dependencies

Even if the model tries to "guess" a missing rule, the verifier stops it.

5. Why verification stabilizes the system

Prevents drift

Incorrect outputs never enter the state, so the trajectory stays on the correct manifold.

Limits error propagation

Even if f produces an unstable output, I only integrates validated transitions.

Creates predictable boundaries

Schemas + invariants + deterministic correction make behavior inspectable.

Enables reusability

Because every step is verified, operators can be mixed, swapped, or pipelined safely.

6. Verifier design patterns

1. Schema validators

  • JSON schema checks
  • structural constraints
  • field-level invariants

2. Semantic validators (deterministic)

  • referential integrity between fields
  • rule-based business logic

3. State-dependent validators

  • DOM-state validation
  • step precondition checks

4. Cross-step consistency checks

  • ensure outputs don't contradict earlier transitions

5. Error-correction rules

  • prune unnecessary fields
  • canonicalize values
  • normalize formats
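Pattern 5 in particular is pure, rule-based rewriting. The sketch below combines pruning, value canonicalization, and format normalization; the allowed-field list and rules are illustrative assumptions.

```python
ALLOWED = {"name", "date", "amount"}

def canonicalize(record):
    out = {k: v for k, v in record.items() if k in ALLOWED}   # prune extras
    if "name" in out:
        out["name"] = " ".join(out["name"].split()).title()   # normalize format
    if "amount" in out and isinstance(out["amount"], str):
        out["amount"] = float(out["amount"].replace(",", "")) # canonicalize value
    return out

rec = canonicalize({"name": "  ada   lovelace ", "amount": "1,200.50", "note": "x"})
```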

7. Why retries are not verification

A retry re-runs the same operator on the same projection, so a systematic error is simply reproduced.
Verification checks correctness deterministically before anything is integrated, and a rejection can change the input rather than repeat it.

Example

If an LLM proposes a browser action targeting a nonexistent selector:

  • Retry → repeats the incorrect proposal
  • Verification → rejects immediately and requests new projection or fallback
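The contrast is easy to make concrete. The operator below is a deterministic stub standing in for an LLM; the selectors and the reprojection rule are invented for illustration.

```python
dom = {"#user", "#login"}

def operator(z):
    # Deterministic LLM stand-in: proposes a selector derived from its input slice.
    return {"op": "click", "selector": z["hint"]}

def verify(u):
    return u["selector"] in dom

def retry_only(op, z, attempts=3):
    # Open loop: same operator, same input -> the same wrong proposal, every time.
    u = op(z)
    for _ in range(attempts - 1):
        u = op(z)
    return u

def verify_then_adapt(op, check, z, reproject):
    # Closed loop: rejection triggers a new projection, not a blind retry.
    u = op(z)
    return u if check(u) else op(reproject(z))

bad_slice = {"hint": "#sbmit"}                   # leads to a hallucinated selector
same = retry_only(operator, bad_slice)           # still wrong after retries
fixed = verify_then_adapt(operator, verify, bad_slice,
                          reproject=lambda z: {"hint": "#login"})
```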

8. Closed-loop architecture is the backbone of verifiable agents

Most agent failures come from:

  • mixing domains,
  • large noisy context,
  • unverified outputs,
  • drifting latent trajectories.

Closed-loop systems correct this by enforcing a tight, deterministic supervision layer around every operator.

This produces:

  • traceability,
  • stable multi-step workflows,
  • consistent state transitions,
  • enterprise-grade reliability.

9. Example: End-to-end loop in a payroll workflow

  1. State: employee profile + jurisdiction
  2. Projection: extract only required fields for this payroll cycle
  3. Operator: LLM proposes payroll adjustments
  4. Verifier:
    • check statutory rules
    • validate tax table alignment
    • detect missing deductions
  5. Integrator: update payroll record
  6. Feedback: next cycle uses updated state
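The six steps above can be sketched compactly. The statutory rule set, field names, deduction rate, and operator stub are all illustrative assumptions, not real payroll logic.

```python
STATUTORY_MIN_DEDUCTIONS = {"tax", "pension"}    # illustrative rule set

def project(profile):                  # step 2: required fields only
    return {"salary": profile["salary"], "jurisdiction": profile["jurisdiction"]}

def propose(z):                        # step 3: LLM stand-in; forgets "pension"
    return {"deductions": {"tax": round(z["salary"] * 0.2, 2)}}

def verify(u):                         # step 4: detect missing deductions
    missing = STATUTORY_MIN_DEDUCTIONS - set(u["deductions"])
    if missing:
        return {"status": "invalid", "missing": sorted(missing)}
    return {"status": "valid", "payload": u}

def integrate(state, y):               # step 5: only verified updates land
    if y["status"] == "valid":
        state = dict(state, deductions=y["payload"]["deductions"])
    return state

profile = {"salary": 5000.0, "jurisdiction": "UK"}
y = verify(propose(project(profile)))  # the missing deduction is caught here
```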

This is not fancy. It is correct.

10. Research directions

  • Designing minimal sufficient verifiers
  • Operator-verifier co-design for stability
  • Schema generation models with formal guarantees
  • Attractor mapping under verified feedback loops
  • Compositional stability analysis for multi-agent architectures

Closed-loop verification is the core mechanism that makes LLM-driven systems reliable, inspectable, and suitable for production-scale automation.
