$ cd /projects/real-o-tron
2026-02-24


Real-O-Tron — Sensory Enhancement Hardware

GitHub: not yet


PRE — Idea · Setup · Build

Goal: Systematically augment human sensory input and observe
what the brain does with information it was never designed to
process. Not virtual reality. Not augmented reality. Augmented
perception.

The premise: reality is not what we think it is. It's what our
senses feed into the brain, and then the brain generates a model
from that data. That model is what we call "reality." It feels
solid, objective, fundamental. It's not. It's a construction.
A best guess. A render.

So what happens if you tamper with the input feed?

What if you take ultrasonic sound — frequencies above 20kHz that
humans literally cannot hear — and fold them back into the
audible range? Pipe that signal into headphones. Wear them 24/7.
Give the brain data from a spectrum it has no evolutionary context
for. And wait.
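The fold itself is easy to sanity-check in a few lines. A minimal sketch, assuming the simplest possible fold, time expansion: keep the samples but reinterpret them at one tenth the rate, so every frequency drops by a factor of ten. The 384 kHz capture rate and the 40 kHz test tone are my numbers; the project only fixes the 10:1 ratio (see Stage 1 below).

```python
import numpy as np

FS_IN = 384_000   # assumed ultrasonic capture rate (Hz)
FACTOR = 10       # the 10:1 fold from Stage 1

# One second of a 40 kHz tone, well above the ~20 kHz limit
# of human hearing.
t = np.arange(FS_IN) / FS_IN
ultrasonic = np.sin(2 * np.pi * 40_000 * t)

# Time expansion: keep every sample but play the buffer back at
# one tenth the rate. Every frequency divides by 10, so the
# 40 kHz tone lands at 4 kHz, comfortably audible.
fs_out = FS_IN // FACTOR  # 38.4 kHz playback rate

# Verify: the dominant FFT bin, measured against the playback
# rate, sits at 4 kHz.
spectrum = np.abs(np.fft.rfft(ultrasonic))
freqs = np.fft.rfftfreq(len(ultrasonic), d=1 / fs_out)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # -> 4000.0
```

Time expansion is not real-time (one second of world takes ten seconds to hear); a live rig on the Pi would get the same 10:1 fold from heterodyning or a pitch-shifter in the DSP loop.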

What would happen with an adult brain? The neural pathways are
mostly fixed. Maybe nothing. Maybe subtle pattern recognition
over weeks.

Theoretically, what would happen with a child's brain? Far more plastic,
constantly building new neural connections. Would it develop
entirely new perceptual categories for this input? Would it
learn to "hear" ultrasonic as naturally as we hear speech?

If we are just the sum of our input data, what kind of human
being would evolve from different input data?

This is Real-O-Tron. A multi-stage program to find out.

The Philosophical Origin

How did I arrive at this?

Two academic frameworks: clinical psychiatry research on altered
states of consciousness, and cognitive science on perception.

The literature on psychedelics describes a consistent phenomenon:
subjects report watching the brain's abstraction layers shift in
real time. The construction mechanism of perceived reality becomes
visible. Layers of abstraction tied to specific states of
consciousness — observable, repeatable, documentable.

Rick Strassman (clinical psychiatrist, "DMT: The Spirit Molecule")
conducted the first FDA-approved human psychedelic study in a
generation and documented consistent, repeatable experiences
across subjects. The same
"places." The same "entities." Across different sessions, across
different people. If a hallucination is repeatable, is it still a
hallucination? Or is the brain tuning to a different channel?

The repeatability makes it feel deterministic. Which means
reality isn't a fixed thing you perceive — it's a frequency you're
tuned to. Change the tuning, change the reality.

This isn't fringe thinking. Donald Hoffman (cognitive scientist,
UC Irvine, "The Case Against Reality") has built a mathematical
framework arguing that evolution optimizes for fitness, not truth
— our senses are an interface, not a window. Strassman's clinical
data and Hoffman's mathematical models point at the same conclusion
from different angles: reality is constructed, not given.

That insight is what Real-O-Tron was designed to test with
hardware instead of chemistry. Same hypothesis, different method:
modify the brain's input stream and observe what reality becomes.

Also: simulation theory. The quantum observer effect — particles
existing in superposition until measured, then collapsing into
definite states — looks a lot like what every game engine does to
save GPU capacity. Don't render what nobody's looking at. In VR
this is called "foveated rendering": headsets like the Pimax
render high detail only where the eye is focused. The universe
appears to do the same thing.

If reality is a render, Real-O-Tron is an attempt to feed the
renderer data it wasn't expecting.

The Stages

Real-O-Tron was designed as a multi-stage sensory enhancement
program. Each stage adds a new augmented sense. Stack them.
See what emerges.

STAGE 1: ULTRASONIC HEARING
  Hardware: Ultrasonic USB microphone, Raspberry Pi (real-time
  DSP), battery pack, headphones, storage for raw data backup.
  Method: Capture ultrasonic frequencies (>20kHz). Fold them
  into the audible range via 10:1 decimation. Mix the translated
  signal with normal audio using a potentiometer or GUI to
  control overlay volume — so you can gradually "mix in" the
  invisible soundscape without being overwhelmed.
  Duration: 24/7 wear. Weeks to months. Let the brain adapt.
  Question: Will the brain learn to extract meaningful patterns
  from ultrasonic data that has been frequency-shifted into its
  processing range?

STAGE 2: INFRARED VISION
  Hardware: IR camera, industry-grade AR/VR glasses, real-time
  image processing pipeline.
  Method: Capture infrared light. Fold it back into the visible
  spectrum. Overlay it onto normal vision via transparent AR
  display. Literally become the Predator.
  Question: Can the brain learn to interpret thermal signatures
  as a natural part of visual perception?

STAGE 3: DIGITAL OLFACTORY ENHANCEMENT
  Hardware: TBD — electronic nose sensors, scent synthesis.
  Method: Detect chemical signatures below human threshold.
  Amplify and translate into perceptible scent cues.
  Question: Can smell be augmented the way hearing and vision
  can?

STAGE N: STACK THEM ALL
  Run all augmented senses simultaneously. Long-term. See what
  kind of consciousness emerges when the brain is processing
  three or more data streams it never evolved to handle.
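Back in Stage 1, the "mix in" control is just a weighted sum applied before the headphone output. A sketch, where the helper name and default level are mine:

```python
import numpy as np

def overlay_mix(normal, translated, level=0.2):
    """Blend the folded ultrasonic signal on top of normal audio.

    `level` plays the role of the potentiometer / GUI slider from
    Stage 1: 0.0 mutes the overlay, 1.0 is full volume. Both
    inputs are float arrays in [-1, 1].
    """
    mixed = np.asarray(normal) + level * np.asarray(translated)
    return np.clip(mixed, -1.0, 1.0)  # keep the output in DAC range
```

Ramping `level` up over days instead of seconds is the "gradually mix in" part: the overlay stays below the startle threshold while the brain gets long exposure.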

Floating tanks subtract sensory input to alter consciousness.
Real-O-Tron adds it. Same experiment, opposite direction.
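Stage 2's fold is the visual analogue of Stage 1's: remap a band the eye cannot see onto one it can. A sketch of a thermal-to-visible remap, where the temperature window and the blue-to-red ramp are my choices, not a spec:

```python
import numpy as np

def ir_to_rgb(thermal, t_min=20.0, t_max=40.0):
    """Map a 2-D array of temperatures (deg C) to an RGB overlay frame.

    Normalizes a window of interest (a hypothetical 20-40 deg C here)
    and renders it as a blue-to-red ramp, so hot regions read as warm
    colors on the transparent AR display.
    """
    norm = np.clip((np.asarray(thermal) - t_min) / (t_max - t_min), 0.0, 1.0)
    rgb = np.zeros(norm.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = (255 * norm).astype(np.uint8)          # red channel: hot
    rgb[..., 2] = (255 * (1.0 - norm)).astype(np.uint8)  # blue channel: cold
    return rgb
```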

POST — Learnings · Afterthoughts · Timeline

What happened:

Nothing. Yet.

The ultrasonic microphone was purchased. The Raspberry Pi was
available. The DSP pipeline was designed in my head. But the
project stayed in the planning phase — detailed, mapped out,
philosophically complete, but never physically assembled.

This is not a dead project. It's a patient one. The hardware is
cheap. The concept is sound. The question is fascinating. It
just needs a block of time where I'm not building FPGA DACs or
training AI companions or organizing Nerd Nites.

Learnings:
  - The most interesting projects are the ones that test
    fundamental assumptions about reality. Not "can I build
    this?" but "what IS this?"
  - The "reality enhancement" thread running through my projects
    (urGlass, Real-O-Tron, ChordKiller, Predator, Realify) is
    not a coincidence. It's the central obsession. Everything
    else is a side quest.
  - If reality is a render, every sense is an API endpoint.
    Real-O-Tron is an attempt to call undocumented endpoints.

Timeline:
  - 2025: Concept fully mapped. Hardware partially acquired.
    Ultrasonic mic purchased. DSP pipeline designed. Parked
    in favor of more immediately buildable projects.
  - 2026: Still on the list. Still fascinating. Still patient.

Status: WIP. Waiting for a gap in the build schedule. The
  universe isn't going anywhere. Probably.