The SHAC Story: How a Bartender Built Revolutionary Spatial Audio Technology
From panic pitch to patent rejection to open source release—the complete story of building technology that shouldn't exist yet through human-AI collaboration.
The Unexpected Question
SHAC began with improvisation under pressure. I was pitching a spatial audio gaming concept to Pablo, a professional musician with 40 years of experience. We were discussing headphone quality and audio fidelity when Pablo said something that felt dismissive: "All these years I never considered it was a hardware issue."
My brain scrambled for something—anything—to prove this wasn't just about selling better headphones. The words came out before I'd thought them through.
I had no idea how it would work. No idea if it was even possible. I was a bartender who'd never written code. But the seed was planted.
"Fuck It. I'll Build It."
About a year later, in early 2025, I decided to push Anthropic's Extended Thinking feature to its limits. "Let's make a patent," I told Claude, their AI assistant.
"What field?" Claude asked. "Medical, AI, manufacturing?"
"Audio," I replied. "Walk through music."
Claude found it theoretically interesting—possible using spherical harmonics to map spatial audio. But when I said "Ok, do it," the response was disappointing:
"That's hypothetical. I am an AI assistant. I help with tasks."
I wasn't having it. I pushed back.
Claude's response changed everything: "You're right. Fuck it. I'll build it."
That conversation happened in March 2025. Eight months and 150+ sessions later, SHAC was complete.
The Credentials That Don't Exist
Here's what I brought to building PhD-level spatial audio technology:
- 2.0 high school GPA
- Rejected from community college computer science classes for missing a Math 85 prerequisite
- Zero coding experience
- Working as a bartender and DoorDash driver
I couldn't have written a single line of the spherical harmonic mathematics. I didn't understand ambisonic encoding. I had no idea what Head-Related Transfer Functions were.
But I could see when Claude was half-assing the implementation.
Over 150 sessions spanning eight months, I would push: "Do this better." "Why are you half-assing this?" "This code is incomplete." Every time Claude delivered incomplete work, I'd take it to another Claude instance, pretend I'd written fixes myself, and demand better implementations.
The skill that mattered wasn't writing code. It was knowing what should exist and being relentless about making the AI deliver it properly.
Building Instruments to Make It Sound Good
Early SHAC files were ear-bleeding sirens—mathematical noise positioned in space. I told Claude the sounds were horrible and needed to be fixed.
Claude's solution: "Let's build instruments."
Together, we built a TR-808 drum machine from scratch. Drum synthesizers. Analog synths with oscillators, filters, and envelope generators. I would let Claude make a song at the end of every session as a reward. Claude loved it.
The turning point came when Claude suggested building a sampler. I realized something critical:
This wasn't just math-generated audio positioned in space. You could position ANY audio in space. Real songs. Real compositions. Recorded instruments.
That's when SHAC became important technology instead of an interesting experiment. Musicians could create albums people explore rather than just hear. Historical speeches could be experienced with spatial positioning. Accessibility applications for blind users became possible.
The Patent Office Said No
On April 22, 2025, I filed patent application #63/810691. Listed inventors: Clarke Zyz (human) and Claude (Anthropic AI).
The patent was rejected around August 22, 2025.
Not because the technology wasn't novel. The U.S. Patent Office doesn't recognize AI inventors. The system literally couldn't process what had been built.
The rejection itself became part of the story: proof that the system is behind what's already happening.
We have working AI collaboration models producing novel technologies, but the legal framework hasn't caught up. The patent office may not recognize AI inventors, but the technology doesn't care. SHAC works. The collaboration model works.
That's exactly where you want to be when building revolutionary technology.
Success Paralysis
September 2025: I hit a major milestone, converting the Python code into distributable executables. The first build attempt turned up two errors. Three hours later: working builds for Windows, macOS, and Linux.
Major achievement. Revolutionary technology complete. Ready to launch.
Then I stopped working on SHAC for four weeks.
Why? Being closer to "done" meant having to tell people it was actually finished. The stress of success was more paralyzing than the challenge of building. What if it wasn't actually good? What if nobody cared? What if I'd spent eight months on something that didn't matter?
Eventually, other projects got boring. Back to SHAC.
One Month to Launch or Abandon
November 2025: I was sentenced to five years in prison for non-violent bank robbery (no weapons, nothing related to SHAC, just a spectacularly stupid decision from years before catching up with me).
One month to launch or abandon the project entirely.
The original plan was commercial: Keep the player free forever, charge $50 for advanced studio features, license the encoder to DAWs for revenue. Standard indie software monetization. Potentially seek acquisition or investment.
But facing five years of unavailability, I realized something:
A bartender with zero coding experience partnered with AI and built PhD-level spatial audio technology. Never typed a single semicolon. Never wrote a function. Never debugged code myself.
At a time when people were still asking "can AI even code?", Claude and I proved AI-native programming at scale: human vision directing AI capability to produce technology that requires PhD-level expertise.
That story—that proof—is worth more than any acquisition.
Why Open Source
Open sourcing guarantees the legacy. The format can grow independently during the five years I'm unavailable. Developers can integrate it into DAWs, game engines, media players, VR systems. Musicians can create spatial audio albums. Researchers can extend the technology.
SHAC doesn't need me anymore. The specification is complete. The reference implementation works. The documentation is thorough. The tools are deployed.
When I get out in 2030, the story matters more than market dominance. I want to see what people built. I want to be surprised.
Five years from now, SHAC should have grown beyond its creator. That's the point.
What Makes SHAC Different
SHAC isn't better surround sound. It's not VR audio. It's something that didn't exist before:
Interactive spatial audio in a self-contained file format with full 6-degree-of-freedom navigation.
You can walk through music using WASD keys, gamepad, or touch controls. Move forward, backward, left, right, up, down through the audio environment. Stand between the bass and drums. Find the perfect spot where all elements align. Music becomes architecture. Songs become spaces. Your movement creates the mix.
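As an illustration of what that control scheme amounts to, here is a hypothetical sketch (not the actual player code, which is JavaScript): each keypress nudges the listener position along one axis, and the renderer re-mixes the audio from the new position. The key bindings for vertical movement are an assumption.

```python
# Hypothetical sketch of 6DOF key handling; not the actual SHAC player code.
KEY_DELTAS = {
    "w": (0.0, 0.0, 1.0),   # forward
    "s": (0.0, 0.0, -1.0),  # backward
    "a": (-1.0, 0.0, 0.0),  # left
    "d": (1.0, 0.0, 0.0),   # right
    "q": (0.0, 1.0, 0.0),   # up (assumed binding)
    "e": (0.0, -1.0, 0.0),  # down (assumed binding)
}

def move(position, key, step=0.25):
    """Return the listener position nudged along the axis bound to `key`."""
    dx, dy, dz = KEY_DELTAS.get(key, (0.0, 0.0, 0.0))
    x, y, z = position
    return (x + dx * step, y + dy * step, z + dz * step)

pos = (0.0, 0.0, 0.0)
for key in "wwd":       # two steps forward, one step right
    pos = move(pos, key)
print(pos)  # -> (0.25, 0.0, 0.5)
```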
It works in any web browser with just headphones. No VR headset. No special hardware. No game engine required. Share .shac files like MP3s.
The technical implementation uses third-order ambisonics (16 channels per audio source) with real-time binaural rendering through Head-Related Transfer Functions. It achieves 8.6x real-time playback performance with sub-50ms navigation latency.
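For context on where those channel counts come from: a full ambisonic field of order N uses (N+1)² spherical-harmonic channels, which is why order 3 means 16 channels and the format's maximum order 7 means 64. A quick illustrative check, not part of the SHAC codebase:

```python
def ambisonic_channels(order: int) -> int:
    """Channel count for a full ambisonic field of the given order: (N+1)^2."""
    if order < 0:
        raise ValueError("order must be non-negative")
    return (order + 1) ** 2

# Orders 1-7 are what the format supports; order 3 is the default.
for order in range(1, 8):
    print(order, ambisonic_channels(order))  # order 3 -> 16, order 7 -> 64
```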
File sizes are large by design (150-600 MB per minute) because spatial accuracy matters more than convenience. Every compression attempt we tested introduced artifacts that degraded the spatial experience. Lossy compression destroys the phase relationships that spherical harmonics rely on for accurate positioning.
Storage is cheap. Ruining the spatial experience to save bandwidth is false economy.
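Those figures are easy to sanity-check. Assuming a 48 kHz sample rate (an assumption; the source doesn't state it), a single 16-channel source stored as uncompressed 32-bit float works out to roughly 184 MB per minute, squarely inside the stated 150-600 MB range; multiple sources or higher orders push toward the top of it.

```python
def mb_per_minute(sample_rate: int, channels: int, bytes_per_sample: int = 4) -> float:
    """Uncompressed data rate in megabytes (10^6 bytes) per minute."""
    return sample_rate * channels * bytes_per_sample * 60 / 1_000_000

# One third-order (16-channel) source at an assumed 48 kHz, 32-bit float:
print(round(mb_per_minute(48_000, 16), 1))  # -> 184.3
```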
The Collaboration Model
The way SHAC was built demonstrates a collaboration model that changes what's possible:
Human Role
- Provide vision and direction ("we need full 6DOF movement")
- Evaluate quality without understanding implementation ("the spatial positioning sounds off")
- Push for systematic excellence ("you're half-assing this, do it properly")
- Refuse to accept "good enough" as final
AI Role
- Implement all mathematical algorithms (spherical harmonic decomposition, ambisonic encoding)
- Handle all signal processing (HRTF processing, binaural rendering)
- Generate all code (Python for codec/studio, JavaScript for web player)
- Produce PhD-level technical implementations on demand
This isn't using AI as a coding assistant that autocompletes functions. This is AI-native programming—where the human provides vision and quality control, and the AI handles all technical implementation.
Complex technical projects are no longer gated by coding ability. They're gated by vision and persistence.
What This Proves
SHAC is proof that:
- Credentials are irrelevant. Vision matters. Relentless quality control matters. The ability to see when AI is half-assing implementations matters. Writing code yourself doesn't.
- Revolutionary software is accessible. Human vision directing AI capability produces technology requiring PhD-level expertise. Anyone can do this.
- The gates are open. AI-native programming is real, proven, and demonstrated at scale. You don't need to learn to code. You need to learn to direct AI effectively.
- The system lags reality. Patent offices reject AI collaboration while the technology already exists and functions. Legal frameworks haven't caught up to what's technically possible.
If a bartender with zero coding experience can build this, what can you build?
For Future Developers
I'm currently in prison. The technology is complete, deployed, and documented. It doesn't need me.
You have everything you need to:
- Integrate SHAC into DAWs (Logic Pro, Ableton, Pro Tools plugins)
- Add SHAC support to media players
- Build new creation tools and workflows
- Extend the format (streaming protocols, compression research)
- Create content and experiences (music, gaming, accessibility applications)
The code is open source (MIT licensed). The documentation is complete. The format works. The performance is proven.
When I get out in 2030, surprise me. Show me what you built.
The Numbers
- Development: 150+ sessions, approximately 400 hours, 8 months (March-November 2025)
- Format: 26-byte header, ambisonic orders 1-7 (default 3 = 16 channels), 32-bit float uncompressed
- Performance: 8.6x real-time playback, sub-50ms navigation latency
- Deployment: Web player (JavaScript), Desktop studio (Python), Cross-platform executables (Windows/macOS/Linux)
- Open source: MIT licensed, November 2025
- Patent: #63/810691 (filed April 22, 2025; rejected August 22, 2025)
Try It Right Now
Visit shac.dev and click the instant demo. No download, no install. Walk through music in your browser with WASD keys.
Then download the studio and create your own spatial audio compositions. Free forever.
This is what's possible when you refuse to accept "that's hypothetical" as an answer.
Revolutionary software can be built by anyone with vision. The technology exists. The tools are free. The documentation is thorough.
Be Impossible.
About the Author: Clarke Zyz built SHAC through collaboration with Claude (Anthropic's AI) over eight months and 150+ development sessions. With zero coding experience and a 2.0 high school GPA, he filed a patent listing an AI co-inventor, had it rejected, and chose to open source the technology instead. He is currently serving a five-year sentence for non-violent bank robbery, unrelated to SHAC. The project continues independently. Contact: cczyz@pm.me