
Enterprise Application Models

Where theory becomes production systems

The architectures described in the previous sections — the transfer-function view of LLMs, zero-context operation, state-space modeling, operator composition, verification, correction loops — are not abstract theory. They map directly onto concrete enterprise domains:

  • HR systems
  • Payroll
  • Compliance
  • Onboarding
  • Document-heavy back-office workflows
  • Browser-based operations for internal tools and vendor portals

This section makes that mapping explicit.

1. Why enterprise workflows break under naive agents

Most "AI automation" attempts in enterprise environments fail because they:

  • treat the LLM as an oracle,
  • rely on long context and chain-of-thought,
  • mix multiple domains (HR + legal + payroll) in a single prompt,
  • have no deterministic verification layer,
  • lack explicit failure paths.

Symptoms:

  • workflows that sometimes work in staging but fail in production,
  • brittle flows that break when a UI changes,
  • hallucinated compliance steps or missing ones,
  • unreproducible behavior (same input, different output),
  • zero auditability.

The models are not the problem. The architecture is.

2. HR: Employee Data as State, Not Context

HR systems revolve around employee state:

  • personal info
  • role, location, compensation
  • eligibility and status
  • lifecycle events (hire, promotion, termination)

Naive agent approach

"Here is the full employee history, job description, contracts — figure out what to do."

This mixes:

  • multiple timeframes
  • multiple purposes (legal, payroll, internal notes)
  • irrelevant attributes

Architecture-aligned approach

Treat employee info as structured state:

  • State-space: xₜ = employee_state
  • Domain extractor: subset relevant to the current operation (e.g., benefits change)
  • Rewriter: align to canonical schema (location, role, effective date, etc.)
  • Operator: propose actions (update system, notify stakeholders, request approval)
  • Verifier: enforce HR policy + type and schema constraints
  • Correction: route ambiguous cases for clarification

HR operations become:

  • reproducible,
  • auditable,
  • easy to simulate and test.
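The pipeline above can be sketched as a chain of small, typed functions over an employee state. All names here (`EmployeeState`, `extract_benefits_domain`, and the policy check itself) are illustrative, not an existing API:

```python
from dataclasses import dataclass

# Full employee state (the state-space x_t); only a slice feeds each operation.
@dataclass
class EmployeeState:
    name: str
    role: str
    location: str
    salary: float
    status: str  # e.g. "active", "terminated"

# Domain extractor: keep only the fields relevant to a benefits change.
def extract_benefits_domain(x: EmployeeState) -> dict:
    return {"role": x.role, "location": x.location, "status": x.status}

# Verifier: enforce policy constraints deterministically, outside the LLM.
def verify_benefits_change(domain: dict) -> bool:
    return domain["status"] == "active" and bool(domain["location"])

emp = EmployeeState("A. Doe", "Engineer", "DE", 80000.0, "active")
domain = extract_benefits_domain(emp)
assert verify_benefits_change(domain)
```

Because the extractor and verifier are plain functions, they can be unit-tested against known employee states without any model in the loop.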

3. Payroll: Jurisdictional Logic as Verified Operators

Payroll is naturally a graph of rules and transitions:

  • tax tables
  • deductions
  • benefits
  • jurisdiction-specific logic
  • effective dates

Naive agent approach

"Here is the paystub and employee data, compute the payroll and explain it."

This places:

  • tax logic,
  • computations,
  • compliance,
  • explanation

all inside one unconstrained LLM call.

Architecture-aligned approach

Break payroll into verified operators:

  1. State: canonical employee + compensation + jurisdiction snapshot
  2. Operator 1: determine applicable rules
  3. Operator 2: construct compliant sequence of calculations
  4. Operator 3: apply calculations
  5. Verifier: recompute totals and cross-check against rules
  6. Operator 4: generate human-readable explanations

Each step is:

  • isolated,
  • domain-specific,
  • verifiable,
  • testable with known inputs.
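The operator decomposition above can be sketched with toy rules and rates (the jurisdictions, rule names, and percentages are invented for illustration):

```python
# Operator 1: determine applicable rules from the jurisdiction snapshot.
def applicable_rules(jurisdiction: str) -> list[str]:
    table = {"US-CA": ["federal_tax", "state_tax"], "US-TX": ["federal_tax"]}
    return table[jurisdiction]

RATES = {"federal_tax": 0.10, "state_tax": 0.05}  # toy rates

# Operators 2+3: construct and apply the calculation sequence.
def apply_rules(gross: float, rules: list[str]) -> dict:
    deductions = {r: round(gross * RATES[r], 2) for r in rules}
    net = round(gross - sum(deductions.values()), 2)
    return {"gross": gross, "deductions": deductions, "net": net}

# Verifier: recompute the total independently and cross-check.
def verify(result: dict) -> bool:
    return result["gross"] - sum(result["deductions"].values()) == result["net"]

run = apply_rules(5000.0, applicable_rules("US-CA"))
assert verify(run)
```

The verifier deliberately redoes the arithmetic rather than trusting the operator's output; in a real system the two computations would come from independent code paths.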

4. Compliance: Rules as Domain Boundaries

Compliance workflows — tax, labor, identity, KYC, regulatory filings — are essentially rule-constrained state machines.

Naive agent approach

Prompt the model with regulations + data and ask it to decide what to do.

This leads to:

  • missing required steps,
  • hallucinated obligations,
  • inconsistent interpretations across runs.

Architecture-aligned approach

Use compliance rules as domain constraints, not as text decoration:

  • State-space: compliance-relevant snapshot
  • Domain extractor: isolate only the fields that influence regulation
  • Rewriter: map into the rule engine / schema
  • Operator: propose required actions based on that schema
  • Verifier: check each proposed action against static and code-encoded rules
  • Correction: adjust or drop steps that violate constraints

This makes compliance workflows:

  • safe,
  • explainable,
  • measurable,
  • compatible with auditors.
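A minimal sketch of rules as code-encoded constraints rather than prompt text: each rule is a predicate over the compliance snapshot, and proposed actions are gated on all of them passing. The rule contents are invented for illustration:

```python
# Each rule is a deterministic predicate over a compliance snapshot.
RULES = {
    "must_have_tax_id": lambda s: bool(s.get("tax_id")),
    "adult_employee": lambda s: s.get("age", 0) >= 18,
}

def verify_actions(snapshot: dict, proposed: list[str]) -> list[str]:
    # Correction: drop proposed steps when the snapshot violates any rule,
    # instead of letting an unconstrained model decide.
    if not all(rule(snapshot) for rule in RULES.values()):
        return []  # block everything until the snapshot is compliant
    return proposed

ok = verify_actions({"tax_id": "123", "age": 30}, ["file_report"])
blocked = verify_actions({"age": 17}, ["file_report"])
assert ok == ["file_report"] and blocked == []
```

Because the rules are code, the same inputs always produce the same decision, and each rejected action can be traced to a named rule for auditors.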

5. Onboarding: DAGs Instead of Free-Form Checklists

Onboarding spans HR, payroll, IT, facilities, security, and often external portals.

Naive agent approach

"Generate a checklist for onboarding this employee in this region for this role."

Every run produces:

  • different ordering,
  • different coverage,
  • different depth.

Architecture-aligned approach

Represent onboarding as a graph of operators:

  • Nodes = onboarding steps (accounts, forms, trainings)
  • Edges = dependencies
  • State = which nodes are completed + current environment

LLM-powered operators:

  • fill in parameters for steps,
  • generate communication content,
  • extract required fields from documents,
  • propose next steps within the graph.

Everything else — ordering, completeness, correctness — is handled by the graph structure and verifiers.
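The graph structure above can be sketched directly: steps and their dependencies live in a plain mapping, and the only question the system ever asks is "which steps are unblocked right now?" (the step names are hypothetical):

```python
# Illustrative onboarding graph: step -> set of prerequisite steps.
STEPS = {
    "create_account": set(),
    "sign_forms": set(),
    "security_training": {"create_account"},
    "grant_access": {"create_account", "sign_forms"},
}

def next_steps(completed: set[str]) -> list[str]:
    # Propose only steps whose dependencies are all satisfied.
    return sorted(
        step for step, deps in STEPS.items()
        if step not in completed and deps <= completed
    )

assert next_steps(set()) == ["create_account", "sign_forms"]
assert next_steps({"create_account", "sign_forms"}) == [
    "grant_access", "security_training"
]
```

Ordering and completeness fall out of the graph; an LLM operator only fills in the parameters of whichever step the graph says is next.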

6. Document Systems: From PDFs to Structured State

Enterprise environments are saturated with:

  • contracts
  • tax forms
  • compliance documents
  • payroll reports
  • benefits summaries

Naive agent approach

Pass the whole PDF into an LLM and ask for fields or summaries.

This yields:

  • inconsistent extractions,
  • hallucinated fields,
  • missing critical data.

Architecture-aligned approach

Pipeline of operators with verification:

1. PDF → text_extractor
2. text → section_locator
3. section → structure_normalizer
4. structure → field_extractor
5. fields → field_verifier
6. verified_fields → workflow / state update

Every stage is:

  • constrained,
  • testable,
  • replaceable.
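A condensed version of this pipeline (the normalizer stage is folded in, and the extractors are trivial stand-ins for real components) shows why each stage is replaceable: every stage is a plain function with a typed input and output:

```python
def text_extractor(pdf_bytes: bytes) -> str:
    return pdf_bytes.decode("utf-8")  # stand-in for a real PDF parser

def section_locator(text: str) -> str:
    return next(l for l in text.splitlines() if l.startswith("Name:"))

def field_extractor(section: str) -> dict:
    return {"name": section.split(":", 1)[1].strip()}

def field_verifier(fields: dict) -> dict:
    assert fields.get("name"), "missing required field: name"
    return fields

def pipeline(pdf_bytes: bytes) -> dict:
    # Stages compose; swapping one implementation does not disturb the rest.
    return field_verifier(field_extractor(section_locator(text_extractor(pdf_bytes))))

doc = b"Header\nName: Jane Roe\nFooter"
assert pipeline(doc) == {"name": "Jane Roe"}
```

An LLM can sit inside any single stage (e.g. `field_extractor`) while the verifier stays deterministic, so a hallucinated or missing field fails loudly instead of flowing into the workflow.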

7. Browser-Based Enterprise Operations

Many enterprise workflows depend on external portals:

  • government sites
  • benefits providers
  • banking platforms
  • payroll/tax authorities
  • legacy internal tools

Naive agent approach

An LLM controls a browser in an open-loop fashion with long context and natural language planning.

This breaks when:

  • the DOM changes,
  • flows branch differently,
  • the environment is slow or only partially rendered.

Architecture-aligned approach

Use browser operators with zero-context and verification:

  • State = current DOM slice + environment metadata
  • Domain extractor = visible, relevant interaction region
  • Rewriter = canonical DOM representation
  • Operator = propose next action (from finite primitive set)
  • Verifier = check selector existence, state alignment, preconditions
  • Integrator = execute action + update state

This is how browser automation becomes reliable.
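A closed-loop step under this model can be sketched as follows: the action vocabulary is a finite primitive set, and the verifier checks preconditions against the current DOM slice before anything executes. The primitives and selector model are invented for illustration:

```python
PRIMITIVES = {"click", "type", "submit"}  # finite action vocabulary

def verify_action(dom: set[str], action: dict) -> bool:
    # Preconditions: known primitive and selector present in the DOM slice.
    return action["op"] in PRIMITIVES and action["selector"] in dom

def step(dom: set[str], action: dict) -> str:
    if not verify_action(dom, action):
        return "rejected"  # explicit failure path: replan, don't guess
    return "executed"     # integrator would run the action and refresh state

dom = {"#login", "#password", "#submit"}
assert step(dom, {"op": "click", "selector": "#submit"}) == "executed"
assert step(dom, {"op": "click", "selector": "#missing"}) == "rejected"
```

The key property is that a stale selector produces an explicit rejection and a replan, rather than an open-loop click into a changed page.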

8. Cross-Domain Workflows (where all of this connects)

Real enterprise flows cut across domains:

  • onboarding touches HR + payroll + compliance + IT + documents + external portals
  • tax filing touches payroll + banking + government portals + archival documents

The architecture handles this by:

  • treating each domain as its own state space,
  • isolating each domain with a dedicated extractor and rewriter,
  • composing operators into verified pipelines and graphs,
  • using correction loops when any step fails.

Instead of one "super agent," you get a system of small, reliable agents.
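The composition described above can be sketched as a generic runner: each domain contributes a (operator, verifier) pair, and a correction loop retries or escalates on failure instead of continuing blindly. All step names and the retry policy are illustrative:

```python
def run_pipeline(steps, state, max_retries=1):
    # steps: list of (name, operator, verifier) triples, one per domain step.
    for name, op, verify in steps:
        for attempt in range(max_retries + 1):
            state = op(state)
            if verify(state):
                break  # verified; move to the next domain step
        else:
            # Explicit failure semantics: escalate rather than proceed.
            raise RuntimeError(f"step failed after retries: {name}")
    return state

steps = [
    ("hr_update", lambda s: {**s, "hr": "done"}, lambda s: s.get("hr") == "done"),
    ("payroll_sync", lambda s: {**s, "payroll": "done"}, lambda s: "payroll" in s),
]
final = run_pipeline(steps, {})
assert final == {"hr": "done", "payroll": "done"}
```

Each domain's extractor, rewriter, and operator live inside its own `op`; the runner only sees verified state transitions, which is what makes the composed flow auditable end to end.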

9. Why this model is different from typical agent platforms

Typical platforms:

  • are context-centric, not state-centric,
  • treat LLMs as oracles,
  • have weak or no verification,
  • do not have explicit failure semantics,
  • do not enforce domain boundaries.

This model:

  • is state-space first,
  • uses LLMs as operators,
  • enforces strict schemas and verifiers,
  • uses zero-context where possible,
  • designs explicit failure paths and correction loops,
  • composes everything into pipelines and DAGs.

This is what makes it suitable for:

  • regulated industries,
  • money movement,
  • employment and payroll,
  • high-stakes document processing,
  • large-scale operational automation.

10. Research and Product Direction

This architecture is not just theory. It suggests concrete product and research directions:

  • reusable operator libraries per domain (HR, payroll, compliance, docs, browser),
  • shared verification schemas and rule sets,
  • failure taxonomies per vertical,
  • standardized planning representations for enterprise workflows,
  • metrics and benchmarks for verifiable agents, not just raw LLM performance.

Enterprise Application Models are where the research surfaces as real systems.
They are the proof that this architecture is not just elegant — it is practical.
