Building in the Middle of Uncertainty
Imagine if AI could act on your intent alone: any visual, job, or task completed instantly, giving you time and value back. It isn't a dream; it's an innovation we'll see in our lifetime. Most innovation doesn’t move in straight lines. It moves through partial maps, wrong assumptions, and long stretches where you’re not sure whether you’re making progress or just rationalizing effort.
I’ve spent most of my career inside complex systems. They rarely fail because the goals are too ambitious. They fail because assumptions pile up faster than understanding. Intent blurs. Tradeoffs get abstracted away. Eventually the system behaves in ways no one intended.
The last eighteen months were a conscious decision to step back into that uncertainty—not to reinvent myself, but to get closer to the work. To build end to end. To feel the constraints directly. To stop relying on abstractions that hide where things actually break.
What follows isn’t a success story or a roadmap. It’s a record of staying with a problem long enough for it to change shape.
I had wanted to start a software company since the dot-com boom.
I started with a mobile app because it was the smallest honest place to begin.
I wasn’t trying to build a platform or prove a thesis. I wanted to understand what it meant to build something useful for people whose work I respect—artists, educators, coaches, makers—at a moment when AI was quietly changing how value gets recognized.
That instinct came from experience, not strategy.
Running an art gallery taught me more than any product role ever did. It didn’t scale. It didn’t last. But it showed me how much meaning comes from making space for people to create and be seen—and how fragile that space is when systems don’t account for it.
A mobile app was concrete. Something I could touch, test, and break without pretending I knew where it would lead.
Learning to Sit with Ambiguity
What surprised me wasn’t how hard it was to build. It was where the difficulty lived.
The problems weren’t features or interfaces. They were assumptions.
Platform limits. Model inconsistency. Timing failures. Paths I hadn’t walked yet, revealing gaps I didn’t know existed. The moment AI entered the system, certainty dropped. Every decision exposed another unknown.
That’s the part that doesn’t show up in demos. Things work—until they don’t. And when they fail, the reasons are rarely obvious.
Most of the work became about staying with that discomfort. Not rushing to label the system brittle or the idea flawed, but tracing where nondeterminism leaked into places I assumed were stable.
So the focus shifted. Less energy on outcomes. More on structure.
I started building constraints that made behavior legible: pipelines that could be replayed, gates where uncertainty mattered, logs that held up after the fact, and environments that revealed what actually changed instead of hiding it.
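As a minimal sketch of that kind of structure (all names here are illustrative, not taken from the actual system), a pipeline becomes legible when every step records its input fingerprint and output, and gates can halt the run where uncertainty matters:

```python
import hashlib
import json

def step_hash(name, payload):
    """Deterministic fingerprint of a step's input, so replays can be verified."""
    blob = json.dumps({"step": name, "payload": payload}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

class ReplayablePipeline:
    """Runs named steps in order, logging enough to replay the run exactly."""

    def __init__(self):
        self.steps = []  # (name, fn, gate) triples
        self.log = []    # append-only record of what actually happened

    def add_step(self, name, fn, gate=None):
        # `gate` is an optional predicate: if it rejects the intermediate
        # result, the pipeline stops instead of propagating uncertainty.
        self.steps.append((name, fn, gate))

    def run(self, payload):
        for name, fn, gate in self.steps:
            fingerprint = step_hash(name, payload)
            payload = fn(payload)
            self.log.append({"step": name, "input_hash": fingerprint, "output": payload})
            if gate is not None and not gate(payload):
                self.log.append({"step": name, "gated": True})
                break
        return payload

# Usage: a tiny two-step run with a gate that rejects empty results.
pipe = ReplayablePipeline()
pipe.add_step("normalize", lambda s: s.strip().lower())
pipe.add_step("tokenize", lambda s: s.split(), gate=lambda toks: len(toks) > 0)
result = pipe.run("  Hello World  ")
```

The log holds up after the fact because each entry pairs a content hash with the output it produced, so a second run over the same inputs can be compared entry by entry.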
It didn’t feel like progress at the time. It felt like circling the problem, slowly narrowing the gap between intention and reality.
Pushing on What Became Possible
I wanted to push on the boundary of what it means to use AI to write code, and to use AI inside software as a real, dependable feature, especially for visual components and generated systems tailored to people.
After twenty-five years working in IT, I never expected to see a tool like this in my lifetime. I spent most of my career building and operating the systems that make things like AI possible—networks, infrastructure, platforms that sit far below the surface. For most of that time, AI was abstract, distant, something other people researched.
This was different.
For the first time, there was a tool that could participate directly in making things—one that could shape interfaces, logic, and behavior. That changed how I thought about what software could be, and what responsibility comes with building it.
Each iteration clarified something new. Not just technically, but philosophically. What should be automated? What should remain explicit? Where does uncertainty belong, and where does it cause harm?
I stayed with those questions longer than was comfortable because they felt new, and because I don’t think we’ve really caught up to what this shift implies yet.
This wasn’t about chasing an opportunity. It was about understanding a moment I never expected to live through.
1. START: AI on Apple, using an LLM Schema → iOS Pamphlet Renderer for real people
When I started thinking about this, in the spring of 2024, I had never written Swift. I had never used Xcode. I didn’t know the difference between UIKit and SwiftUI, didn’t understand Apple’s build system, and had no intuition for how opinionated the platform really is.
That didn’t stop me. With a few years of Python and TypeScript under my belt, I figured Swift wouldn't be hard to pick up, given what I could do with LLMs as a pair programmer.
AI had started to visibly displace people whose work I value — actors, illustrators, writers, service professionals — and I felt compelled to try something. Not because I had a plan, but because it felt irresponsible not to. I assumed, somewhat naively, that intent plus modern tools would be enough to close the gap.
The first idea was a “tar pit” iOS app: a dynamic, Craigslist-style pamphlet directory that could aggregate listings, profiles, and search for people trying to connect outside traditional platforms. A lightweight surface. Minimal friction. Something that could adapt quickly as needs changed.
I had never been a product manager, and using AI to pick up that skill didn't work well. I couldn't land on an MVP and kept chasing new ideas. Looking back, maybe that was the best thing that could have happened.
The core loop was simple on paper:
Intent → description → experience. When it worked, it felt magical.
The idea was simple: say what you want in plain language, and the app turns it into a real experience—screens, motion, and interaction—without hand-building everything. No fixed layouts. No rigid flows. Just intent becoming something you could use.
In reality, the system pushed back hard.
Mobile platforms are strict by design. They expect control and predictability, and anything that tries to change itself on the fly tends to break loudly. Things crashed, behaved inconsistently, or worked once and never again.
I learned by breaking things. Repeatedly. Ship, fix, repeat.
Over time, I stopped fighting the environment and started understanding it—where flexibility was allowed, where it wasn’t, and why. I gained a real appreciation for Apple’s tools and constraints, not as obstacles, but as guardrails.
The lasting lesson was simple: flexibility only works when it’s contained. You can describe what you want freely, but execution has to be disciplined or everything falls apart.
I didn’t set out to become an Apple developer. I just stayed long enough to learn the rules well enough to ship real apps across iPhone, iPad, and Mac.
The first version didn’t survive. What it taught me did.
2. When Flexibility Met Reality
Early on, I believed AI could let people describe what they wanted and have software simply respond. The idea felt liberating: intent becomes experience, without translation loss.
Reality intervened quickly.
The environments where people actually live—phones, computers, operating systems—are built on discipline and predictability, and those systems were designed long before AI existed. Anything that invents itself on the fly breaks trust fast. Crashes, inconsistencies, and silent failures taught me the same lesson over and over:
Freedom without structure doesn’t empower people. It confuses and destabilizes them.
That tension—between expression and reliability—became the real problem to solve.
3. Turning Expression into Something Meaningful
To move forward, I had to separate what people say they want from how systems safely make it happen.
This led to the first major breakthrough: a stateless, declarative transformation engine. Instead of hard-coding outcomes, intent could be described once, then expanded into reliable behavior using known, constrained building blocks.
That work eventually crystallized into the first patent: Realtime Stateless Transformation of Human Intent
What mattered most wasn’t the name; it was the speed. Because the system isn't waiting on a long stream of generated text, it can transform intent into behavior fast enough to feel immediate, so you can play with it and iterate while you're using it. Not “eventually consistent.” Not “check back in a minute.” Real time.
That speed changes how people think and work. After years of building large systems, I’ve learned that waiting isn’t just a technical cost—it’s a cognitive one. People tolerate token streaming or a brief pause, but they disengage when every change requires a long wait, a spinner, or another skeleton screen. Real-time iteration keeps curiosity alive. It lets people explore, adjust, and understand cause and effect without friction. That immediacy is what turns AI from something you request into something you work with.
What mattered wasn’t novelty—it was containment. People could express freely, while the system remained predictable and auditable and just worked.
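A toy illustration of that containment (the vocabulary and field names here are invented for illustration, not taken from the patented engine): intent is a declarative description, and a stateless expander maps it onto a fixed set of known building blocks, rejecting anything outside the vocabulary:

```python
# Hypothetical building-block vocabulary: each entry maps a declared intent
# to a concrete, pre-validated component description.
BLOCKS = {
    "heading": {"kind": "text", "style": "title"},
    "gallery": {"kind": "grid", "columns": 2},
    "contact": {"kind": "form", "fields": ["name", "email"]},
}

def expand(intent):
    """Statelessly expand a declarative intent into concrete components.

    Unknown block names are rejected rather than improvised, which is
    what keeps the output predictable and auditable.
    """
    components = []
    for item in intent["layout"]:
        if item not in BLOCKS:
            raise ValueError(f"unknown building block: {item}")
        components.append({"block": item, **BLOCKS[item]})
    return {"title": intent["title"], "components": components}

# Same intent in, same structure out, every time: no hidden state.
page = expand({"title": "Studio Pamphlet", "layout": ["heading", "gallery"]})
```

Because the expander holds no state, identical intent always yields an identical structure, which is also what makes it fast enough to iterate on live.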
4. Discovering the Ceiling of Prompt-Only Systems and the beginning of AI-based UX
I explored web-based tools and open systems next, hoping faster iteration would be enough. It helped—but only briefly.
Prompt-driven systems flatten quickly. They can generate impressive results, but they struggle with continuity. Motion drifts. Frame-to-frame timing varies across devices; managing it is achievable, but keeping it in sync with reality is hard. Transitions feel different run to run. People notice. They may not articulate it, but they feel the instability immediately.
That’s when it clicked: interface motion isn’t decoration—it’s part of how people understand cause and effect. If transitions aren’t reliable, trust erodes. If timing isn’t consistent, the experience feels brittle no matter how clever the output is.
That realization led to another patent: Declarative, Context-Aware Animation and Transition Engine for Dynamic UX
The idea was simple but hard to execute: treat motion as intent, not scripts. Define what should happen, adapt it deterministically to context—accessibility settings, device performance, environment—and render it the same way every time. No guessing. No “best effort.” Just predictable, verifiable behavior.
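A sketch of "motion as intent" (the field names and base timings are invented for illustration): the declaration says what should happen, and a deterministic resolver adapts it to context such as a reduced-motion setting or a slower device, producing the same concrete timing for the same inputs every time:

```python
def resolve_motion(intent, context):
    """Deterministically turn a motion intent into concrete timing.

    `intent` declares what should happen ("transition", "emphasis");
    `context` describes the environment. No randomness, no best effort:
    identical inputs always yield identical output.
    """
    base_ms = {"transition": 300, "emphasis": 150}[intent["kind"]]
    if context.get("reduce_motion"):
        # Respect accessibility settings: collapse to an instant change.
        return {"duration_ms": 0, "curve": "none"}
    if context.get("device_tier") == "low":
        # Slower hardware gets a shorter, cheaper animation.
        base_ms = base_ms // 2
    return {"duration_ms": base_ms, "curve": intent.get("curve", "ease-in-out")}

# Same declared intent, two contexts, two predictable renderings.
fast = resolve_motion({"kind": "transition"}, {"device_tier": "low"})
accessible = resolve_motion({"kind": "transition"}, {"reduce_motion": True})
```

The point of the design is that adaptation happens in one pure function, so behavior can be verified ahead of time rather than observed after the fact.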
This phase clarified something fundamental for me:
AI isn’t useful just because it can generate things. It’s useful when people can work through what it creates—revise it, trust it, and rely on it to behave the same way tomorrow as it does today.
5. Seeing the pattern from Years in the Cloud
My years working with large-scale cloud systems shaped what came next. I knew that video and voice conferencing, financial transactions, calendars, and integration with the real world would all require those cloud APIs.
Modern software isn’t one system—it’s many, spread across regions, providers, and layers of abstraction. AI dramatically reduces the effort required to connect these pieces, but it also magnifies the cost of mistakes if it does too much for you.
What became clear is that future software isn’t about writing more code. It’s about deciding what is allowed to happen, and under what conditions.
This phase connected everything: identity, permissions, actions, reversibility. AI could accelerate integration, but only if the system treated execution with the same care as infrastructure.
The goal shifted from building tools to building fast, meaningful, and trustworthy pathways from intent to action.
6. Controlling time and perception using machine learning
At some point, everything I’d been learning started pointing in the same direction.
I kept running into the same frustration: things looked impressive, but they didn’t feel right. Small delays broke momentum. Inconsistent motion made experiences feel fragile. Even when the results were technically correct, they didn’t earn trust.
That forced me to slow down and pay attention to what people actually experience.
I stopped thinking in terms of screens and features and started thinking in terms of perception. How quickly does something respond? Does it feel stable from one moment to the next? Does it behave the way you expect, even when the system underneath is doing a lot of work?
That shift changed everything.
I learned that creativity doesn’t disappear when you add constraints—it survives because of them. You can let ideas explore freely behind the scenes, but what people see has to arrive clearly, consistently, and on time. Content can be late. Confusion can’t be.
Following that line of thinking led to another patent: Deterministic Interactive Experience Execution with Asynchronous Content Preparation and Controlled State Progression
What I’m most interested in now is what comes next.
As models get better, they won’t just respond—they’ll anticipate. They’ll predict what might happen a moment from now, prepare pieces of an experience ahead of time, and hold them until the right moment to appear.
If this works the way I think it can, the system does its thinking in real time, generating future visuals from input, while the experience unfolds in human time. Frames are prepared early but revealed deliberately, making the result feel seamless. What you see arrives when it feels right, not when the machine happens to finish. We aren't far away from this.
That separation matters. Wall-clock time becomes an implementation detail. Perceived time becomes something you can shape. Experiences stop feeling reactive or jittery and start feeling calm, intentional, and responsive.
For people, this means software that feels less like a tool you wait on and more like something that moves with you. Something that keeps up without rushing you. Something that feels considered, not impatient.
That’s the future I’m leaning toward—not faster outputs, but better timing. Not more automation, but more care in how experiences arrive.
7. END: The world's first distributed code compiler and execution engine
What finally clicked was this: I didn’t need AI to do everything at once.
Instead of generating huge, fragile structures up front, I could mix things differently. Deterministic systems could handle what needs to be stable. AI could contribute ideas, variations, and possibilities along the way. Code and rendered output could exist together, in motion, rather than as one big finished plan.
That combination opened up an entirely new design space.
When done right, it’s faster, not slower. More flexible, not more complex. You don’t wait minutes staring at a loading screen. Pieces arrive as they’re ready, in a way that feels natural and intentional.
That way of thinking led to my most recent patent: Distributed Compilation with Temporal Presentation State Control
The name is heavy, but the idea is simple. Let systems prepare in the background. Let AI explore possibilities. But only move forward when things are ready to be seen—at the right moment, for the right person.
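One way to picture that separation (a simplified sketch, not the patented mechanism): content is prepared asynchronously into a buffer in whatever order it finishes, but a presentation clock decides when each piece is allowed to appear:

```python
import heapq

class PresentationBuffer:
    """Holds prepared content until its scheduled reveal time.

    Preparation can finish in any order and at any speed; what the
    viewer sees is released strictly by the presentation clock.
    """

    def __init__(self):
        self._ready = []  # min-heap of (reveal_at, content)

    def prepare(self, reveal_at, content):
        # Background work lands here, possibly well before reveal_at.
        heapq.heappush(self._ready, (reveal_at, content))

    def reveal(self, now):
        """Release everything whose reveal time has arrived, in order."""
        shown = []
        while self._ready and self._ready[0][0] <= now:
            shown.append(heapq.heappop(self._ready)[1])
        return shown

buf = PresentationBuffer()
buf.prepare(reveal_at=2.0, content="frame B")  # finished early, held back
buf.prepare(reveal_at=1.0, content="frame A")
first = buf.reveal(now=1.5)   # only frame A is due yet
rest = buf.reveal(now=2.5)    # frame B arrives on its own schedule
```

This is the sense in which wall-clock time becomes an implementation detail: the machine may finish "frame B" first, but perceived order and timing are controlled by the reveal schedule.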
For me, this wasn’t just a technical shift.
It was the point where the work stopped feeling like experimentation and started feeling like a glimpse of what software could become.
A Retrospective
There were multiple points where I could have stopped.
I could have shipped a base-hit product. Something commercially legible. Something that fit cleanly into the Apple App Store, raised money, found users, iterated, and exited on a timeline everyone understands.
I chose not to.
As I learned more, it became harder to justify spending years optimizing for a distribution model that already feels boring. If it’s becoming possible to build hardware and systems that can compile and render any application — across form factors, environments, and constraints — then anchoring myself to a single marketplace started to feel like solving yesterday’s problem extremely well.
But I’ve already spent most of my life inside this field. I’ve watched entire waves come and go. And for the first time, I felt like I was brushing up against something that might actually change how software exists, not just how it’s packaged.
Earlier in my career, I worked on large-scale infrastructure projects—migrating data centers to the cloud, improving efficiency, optimizing systems. At the time, the work felt neutral, even positive. Only later did I fully understand the downstream human impact: displacement, abstraction, decisions made far away from the people they affected.
That awareness matters more to me than technical elegance. Perspective outweighs skill. I don’t want to spend the next chapter of my life building things that quietly increase harm just because they’re commercially convenient.
No one today will fund a scientific experiment the way AI systems actually need to be built. An open-ended, exploratory development effort—one where failure is expected, outcomes are unclear, and the value is learning rather than product — doesn’t fit the venture model. We’d never demand that kind of certainty from physics or medicine, yet we expect it from systems that are already reshaping society. I think that’s backwards.
If anything deserves patient, experimental investment, it’s the systems that will increasingly mediate how people work, create, and relate to one another.
What's next?
I didn’t plan to end up here.
What I see now is a different way forward for AI software—one where intent is described declaratively, ideas are prepared ahead of time, and experiences unfold in human time, not machine time. As models improve, they won’t just generate content; they’ll anticipate what might come next and help shape experiences that arrive calmly, predictably, and with care.
Looking back, the path felt scattered. In hindsight, it was consistent. Each constraint pushed me toward the same boundary: letting uncertain systems participate in making real things without letting them destabilize what people experience.
The work led to something I trust. The patents are filed. The prototypes hold up. What started as frustration with brittle tools became a foundation for thinking about how future systems might behave more responsibly.
This doesn’t feel finished. It feels like the beginning of a larger conversation — and I’m still following where it leads.