Dec 14, 2025

Determinism vs Stability

Designing Systems in Non-Deterministic Environments

Stable systems converge within bounds · Unstable systems diverge over time

Modern systems increasingly operate in environments that are not deterministic.

Inputs are noisy.

Interfaces change.

Timing varies.

Failures are partial.

Recovery matters as much as correctness.

Yet much of software engineering intuition is still grounded in a deterministic worldview. That mismatch is at the root of many brittle systems, especially those involving automation, AI, and real-world interaction.

This is not a failure of engineering discipline.
It is a mismatch of abstraction.

This distinction is closely related to model-coupled vs model-decoupled systems. Model-decoupled systems are inherently designed for stability over determinism — their correctness comes from architectural constraints, not model precision.

What Determinism Actually Solves

Determinism is a local property.

It is extremely effective for problems with:

  • Fixed inputs
  • Well-defined outputs
  • Closed environments
  • Minimal feedback

Examples include:

  • Data validation
  • Parsing known formats
  • Schema enforcement
  • Business rules
  • State serialization

In these domains, determinism is essential. It gives testability, predictability, and clarity. Removing it would make systems worse, not better.
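
For concreteness, here is a minimal sketch of the kind of code this describes; the Order schema and its fields are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    order_id: str
    quantity: int

def validate_order(raw: dict) -> Order:
    # Deterministic: the same input always produces the same Order or the same error.
    if not isinstance(raw.get("order_id"), str) or not raw["order_id"]:
        raise ValueError("order_id must be a non-empty string")
    if not isinstance(raw.get("quantity"), int) or raw["quantity"] <= 0:
        raise ValueError("quantity must be a positive integer")
    return Order(order_id=raw["order_id"], quantity=raw["quantity"])
```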

The problem arises when determinism is extended beyond its natural scope.

Where Determinism Breaks Down

Many real-world systems are not functions. They are dynamic systems.

They involve:

  • Feedback loops
  • Delayed signals
  • Partial observability
  • Stochastic behavior
  • Adversarial or evolving environments

Functions: input → output · Systems: continuous feedback and adaptation

Examples include:

  • Browser automation
  • Distributed systems
  • Robotics
  • Control systems
  • Human-in-the-loop workflows
  • AI-driven interfaces

In these systems, insisting on end-to-end determinism is not just expensive — it is counterproductive.

You can add more rules.

You can chase edge cases.

You can tighten constraints.

But complexity grows faster than correctness.

The Hidden Assumption Behind Determinism

Determinism assumes the environment cooperates.

That assumption quietly fails once:

  • Interfaces drift
  • Timing changes
  • Inputs become ambiguous
  • Retries interact
  • Failures are partial rather than total

At that point, deterministic logic doesn't converge. It oscillates.

More precision does not produce more reliability.

Stable systems track drifting targets · Deterministic systems break when targets move

Stability Is a Different Goal

Stability is a system-level property, not a local one.

A stable system is not one that always behaves the same way.
It is one that behaves within acceptable bounds over time.

Stability means:

  • Errors do not cascade
  • Retries converge instead of loop
  • Failures are detectable and classifiable
  • Costs remain bounded
  • Recovery is possible without panic

A stable system may produce different outputs across runs — and still be correct.
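
As a rough sketch of what those properties can look like in code, assuming an attempt_step callable that reports one of three classified outcomes; the names and budgets are illustrative, not a prescribed API.

```python
import random
import time
from enum import Enum, auto

class Outcome(Enum):
    SUCCESS = auto()
    RETRYABLE = auto()   # transient: timing, flaky interface, noisy input
    FATAL = auto()       # detected and classified; retrying would not help

def run_with_bounds(attempt_step, max_attempts=5, max_seconds=30.0):
    # Stability in miniature: attempts and wall-clock cost are both bounded,
    # retries back off instead of hammering, and every exit is classified.
    deadline = time.monotonic() + max_seconds
    delay = 0.5
    for _ in range(max_attempts):
        outcome, result = attempt_step()
        if outcome is Outcome.SUCCESS:
            return result
        if outcome is Outcome.FATAL or time.monotonic() + delay > deadline:
            break  # stop within bounds instead of cascading or looping forever
        time.sleep(delay + random.uniform(0, delay))  # jitter so retries don't interact
        delay *= 2  # exponential backoff keeps total cost bounded
    raise RuntimeError(f"did not converge within {max_attempts} attempts / {max_seconds}s")
```

Nothing in this sketch requires any single attempt to be deterministic; it only requires that the loop around it either converges or stops.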

Systems vs Functions

A useful distinction:

  • A function is correct or incorrect. Determinism optimizes functions.
  • A system converges or diverges. Stability governs systems.

Most real-world problems that feel "hard" are hard because they are being treated like functions when they are systems.

Why AI Makes This Unavoidable

AI-driven systems expose this mismatch clearly.

AI operates in domains that are:

  • Ambiguous
  • Probabilistic
  • Context-sensitive
  • Non-deterministic by nature

Trying to force determinism at the system level leads to:

  • Brittle prompting
  • Excessive retries
  • Silent failure modes
  • Overconfidence in evals
  • Systems that degrade under pressure

The solution is not "better prompts" or "bigger models."
It is architectural stability.

Deterministic core · Non-deterministic boundary · Stability at the system level

The Right Division of Responsibility

Effective systems use both determinism and non-determinism — but intentionally.

Determinism belongs in:

  • Constraints
  • Validation
  • Invariants
  • Safety rails
  • Termination conditions

Non-determinism belongs where:

  • Interpretation is required
  • Environments are noisy
  • Adaptation matters

The system as a whole is not deterministic.
It is stable.
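
One way to picture that division, as a sketch under assumed names: propose_action stands in for any non-deterministic component (a model call, a heuristic, a human), while everything around it is deterministic.

```python
MAX_STEPS = 20                                         # deterministic termination condition
ALLOWED_ACTIONS = {"click", "type", "scroll", "done"}  # deterministic constraint

def is_safe(action: dict) -> bool:
    # Deterministic safety rail and invariant check applied to every proposal.
    return action.get("name") in ALLOWED_ACTIONS and len(str(action.get("arg", ""))) < 1000

def run(goal: str, propose_action, apply_action) -> str:
    for _ in range(MAX_STEPS):         # the loop always terminates
        action = propose_action(goal)  # non-deterministic: interpretation, adaptation
        if not is_safe(action):        # deterministic validation gates every effect
            continue                   # reject and ask again; bad output never escapes
        if action["name"] == "done":
            return "success"
        apply_action(action)           # side effects only happen past the deterministic gate
    return "stopped_within_budget"     # failure is detectable and costs are bounded
```

No two runs need to take the same path through this loop; what stays fixed are the boundaries every proposal must pass through.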

A Simple Test

Ask one question:

If the environment changes slightly, does the system recover or collapse?

If it collapses, more deterministic logic will not save it.

Only better system design will.
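
In code, that question might be phrased as a small hypothetical harness; the system, the environment, the perturbation, and the status values are all placeholders passed in as parameters.

```python
def survives_perturbation(build_env, perturb, run_system, step_budget=50):
    # Hypothetical harness: build the environment, nudge it slightly, and ask
    # whether the system still ends in a bounded, classified state.
    env = build_env()
    perturb(env)  # e.g. rename a field, add latency, reorder elements
    status, steps_used = run_system(env, step_budget)  # assumed to return (status, steps_used)
    return status in {"success", "handled_failure"} and steps_used <= step_budget
```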

Why This Distinction Matters

Many teams interpret friction as failure.

In practice, friction often signals that:

  • Assumptions have met reality
  • Local correctness is no longer sufficient
  • Architecture, not logic, is now the bottleneck

At that stage, chasing determinism wastes time and money.

The work shifts from:

"Make this exact"

to:

"Make this resilient"

Closing Thought

Determinism is a powerful tool.
But it is not the goal.

The goal is systems that remain correct under uncertainty, drift, and pressure.

That is stability.

And stability is not something you add with more logic.
It is something you design for from the start.