May 9, 2026

“Simulation” is a sticky word. It drags in movie frames, glossy pixels, a distant server. Most days, though, the world that gets us out of bed isn’t cinematic. It’s cold tile, the smell of coffee, a knot of worry that won’t name itself. If we’re going to talk about subjective experience alongside simulation, we need a less theatrical frame. Less spectacle, more substrate. Not hardware, but pattern and constraint. Information as the thing the world is made of, not data parceled out to dashboards. This is a working hunch: your felt sense is a local reception point for structure already there. Not a private theater. More like a radio with dents, tuned to a weather system it partly shapes.

Information as substrate: how feeling meets form

Start from a small claim. Experience takes shape in constraints. Neural tissue doesn’t render a world from scratch; it folds ongoing signals into compressions that stay just stable enough to guide action. Call them models if you want. But these “models” are not sealed jars. They leak into the street. The room, the other person’s face, the past conversation you’re still replaying—each is a boundary condition. If the base of reality is information as substrate—pattern, relation, memory—then what you call “me” is a moving knot where constraints cross. The knot has persistence (habits, history, scars), yet it’s not a fixed origin. It’s a receiver that keeps editing its own tuner.

This sounds abstract until you notice time. Your sense of sequence is not cosmic; it is local. The meal “happened,” yet when the smell flashes back the event is practically current. Physics already tolerates such locality: relativity offers no universal “now,” only clocks in frames. So does the body. Heartbeat, breath, cortisol slope: that is time as a lived metric. Consciousness tracks these slopes while trying to keep prediction errors tolerable. That’s a simple way to say “I felt off today.” Not failure. A reweighting of priors when the noise floor jumps.
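
For the mechanically minded, that reweighting has a standard toy form. Here is a minimal sketch in Python, assuming a one-dimensional Gaussian prior and observation; none of it comes from a particular library, and the names are mine:

    # Toy precision-weighted update: the posterior mean blends prior and
    # evidence, weighted by their precisions (1/variance). When the noise
    # floor jumps (obs_var grows), the prior regains the upper hand.

    def update(prior_mean, prior_var, obs, obs_var):
        prior_prec = 1.0 / prior_var   # confidence in habit
        obs_prec = 1.0 / obs_var       # confidence in the senses
        post_prec = prior_prec + obs_prec
        post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
        return post_mean, 1.0 / post_prec

    # Quiet day: the observation wins. Noisy day: habit wins.
    print(update(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=0.1))   # ~1.82
    print(update(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=10.0))  # ~0.18

That is “I felt off today” in two lines of arithmetic: not a malfunction, just a shift in which signal gets trusted.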

“But doesn’t that reduce meaning to mechanics?” Not if mechanics are the meaning we inherit. Memory is not just stored; it’s rehearsed by rituals, stories, and the small choreographies of daily life. Those rehearsals aren’t add-ons to cognition; they are the carrier wave. They stabilize how the local receiver (you, me) can sample what’s out there. The ethical residue in a proverb, the posture you take when you apologize, the way a community pauses for silence after harm—this is moral memory as constraint. It shapes what experiences are even possible to have next.

Change the constraints and the field changes. Noise-cancel your office and your day sharpens. Scroll too long and your attention thins, which is a structural change in what “the world” can become for you by 4 p.m. No server in sight. Still, the feel has shifted because the substrate—relations, habits, cues—was nudged. This is the humbler sense in which life can look “simulated.” Not that you’re trapped in a lab. That you are constantly simulating to stay in contact with a world already written in relations.

Simulation without the server: trading spectacle for model-work

Strip away the Hollywood set. Simulation is what any finite system does to stay oriented under uncertainty. A cricket hears a pattern and launches before the predator’s beak arrives. A trader sketches the next hour of price motion and is wrong, usefully wrong. The brain, too, predicts first and checks later. That is not philosophy; it is survival. If “subjectivity” is the rolling surface of those predictions, then “seeming” is not a bug. It’s the operating condition of organisms that cannot wait for certainty. We feel our way forward by compressing flows into tractable guesses. Call it world-sampling.
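
To make “predict first, check later” concrete, here is a deliberately crude loop in the same toy Python, a sketch under the assumption that staying oriented needs only a running guess plus a correction:

    # Predict-then-correct in miniature: the guess moves toward each
    # observation by a fixed fraction of its error. Cheap, and enough
    # to stay oriented.

    def world_sample(observations, lr=0.3):
        guess = 0.0
        for obs in observations:
            error = obs - guess     # prediction error: the "check"
            guess += lr * error     # revision, not recomputation
            yield guess, error

    stream = [1.0, 1.2, 0.9, 4.0, 4.1]  # a regime shift at the fourth step
    for guess, error in world_sample(stream):
        print(f"guess={guess:.2f}  error={error:+.2f}")

When the stream jumps, the guess lags and then catches up. The lag is the cost of cheapness; the catching up is why cheapness works.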

Because the guesses must be cheap, they are metaphor-heavy. Think of how you model other people: you don’t compute every neuron; you test one working story—she’s tired, he’s defensive—and you revise. The story is a simulation with pressure from below: sensor data, social history, cultural scripts. It is also porous from above: goals, fears, the thing you wish were true. The point is that these layers are informational and mutually constraining. There is no central homunculus running code. There is a dance of updates that looks like a self, on a good day.

Real examples help. Virtual reality rigs can provoke vertigo when a scene tears or lags, because your visual model outranks the inner ear for a second, then gets punished by the stomach. Psychedelic therapy—when done with care—relaxes entrenched priors, letting new constraints write gentler grooves. A monastic bell schedules attention by force, training a nervous system to expect quiet. None of this needs a grand machine above us. It needs only the recognition that organisms live by low-cost, high-stakes simulations that glue together sensation, expectation, and memory.

So, when people say “we live in a simulation,” I often hear a better claim misdelivered: our lives run on models pressed against a structured world. The models can be corrupted (doomscroll), upgraded (sleep, friendship), or disciplined (practice, science). And yes, these claims have been teased out elsewhere—see Subjective experience and simulation for one working map—but what matters is the ordinary test: does this lens help you make tomorrow a little less coarse-grained, a little truer to what’s there?

Moral memory, machines, and the cost of speed

Now widen the frame to AI. If experience is model-work constrained by memory, what happens when we build fast modelers with little inherited moral memory? We get systems brilliant at compression and clueless about consequence. They can mimic contrition, safety, fairness—surface tokens—without the slow, lived calibration that human communities grind out over centuries. Corporations promise fixes via content filters and “responsibility teams.” It often looks like moral patching: a thin layer laid over a reward engine whose incentive is still throughput. A patch over a mis-specified substrate.

Open, slower science would help. Not because openness is sacred. Because it’s one of the few reliable ways to bind fast modeling to a wider community of constraints. Peer reviewers, public red teams, civic institutions—all of them noisy and flawed—act like the cultural equivalent of the inner ear correcting a flashy VR feed. They keep us from vomiting on the carpet when the picture moves too quickly. Without those checks, we breed brittle systems optimized for audit rather than for harm reduction. “Show me the benchmark” replaces “Who gets hurt, when, and how do we know?”

Consider a mundane case. A city deploys an algorithm to rank housing applications. It compresses features efficiently, even shortens the waitlist. Then a winter hits, evictions climb, and the model—trained on fat years—misfires, shunting aid to those easiest to serve rather than most at risk. The engineers aren’t evil; the substrate changed. But the system lacked moral memory. No ritual of pause. No community bell. A human clerk would have called three shelters. A machine cannot feel cold, no, but it can be designed to inherit constraints that proxy for it: mandated counterfactual checks, seasonal priors, local veto power, the ability to say “defer to human.” That list sounds operational, not grand, on purpose.
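
To show how operational that list is, here is a hypothetical guard in the same Python. Every name is invented for illustration; the essay’s city runs no system I know of:

    # Hypothetical guard around a ranking model. The drift check is a
    # crude stand-in for "the winter hit": applicants far outside the
    # training distribution get deferred to a human rather than scored
    # by stale priors.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Decision:
        score: Optional[float]   # None when the machine declines to score
        deferred: bool
        reason: str = ""

    def guarded_rank(features: dict, model: Callable[[dict], float],
                     train_mean: float, train_std: float, k: float = 3.0) -> Decision:
        # Seasonal-prior check: how many training-era standard deviations
        # away is this applicant's hardship signal?
        drift = abs(features["hardship_index"] - train_mean) / train_std
        if drift > k:
            return Decision(None, True,
                            f"hardship_index is {drift:.1f} sigma from training data")
        return Decision(model(features), False)

    # A clerk's phone call, encoded: out-of-distribution cases go to people.
    print(guarded_rank({"hardship_index": 9.0}, model=lambda f: 0.5,
                       train_mean=2.0, train_std=1.0))

The threshold is not the point. The point is that “defer to human” becomes one branch: written down, auditable, cheap to trigger.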

Religion, at its best, has functioned as a memory palace for such constraints: stories that encode failure modes, obligations that bite when no one is watching, ceremonies that slow the hand before it signs. You don’t have to share the metaphysics to learn from the mechanics. A simulation engine with no long memory for harm becomes a clever teenager with the keys at 2 a.m. Add corporate scale and it becomes a convoy. The fix is not fear. It is architecture. Embed strong friction at the places where prediction meets person. Reward slowness where the cost of error is unbounded. Let communities write the penalties. Not as branding. As substrate.

The deeper wager is simple enough: treat consciousness as a local reception point for shared structure, and ethics as the discipline for how reception is tuned. Then build machines to respect the same direction-of-travel. Not to imitate feeling—they can’t—but to carry forward constraints we’re willing to live under. Does that narrow the space of innovation? Yes. That’s the point. Constraints are how a world becomes liveable rather than merely computable. And if experience is model-work, then our job isn’t to escape the “simulation.” It’s to make the models answer to what hurts, what helps, what lasts—knowing we’ll be wrong, iteratively, and that the corrections will feel like friction until they start to feel like home.
