Blog Posts

Feb. 6, 2026: The Rise of the Alien Competitor

I believe we are staring at a Speciation Event in the corporate world. A chasm is opening between the "Old Guard" and a new breed of competitor. And this new breed isn't just faster than us.

In fact, to any traditional organization, they will look Alien.

Constraint Thinking

To understand why, we have to look at how we are trained. If you’ve ever taken a foundational computer science course (like CS50), you know we spend a massive amount of time learning to manage limitations. We obsess over memory allocation and code efficiency.

We are trained to find the "best" solution given the constraints of the tool.

Whether it's the row limit of Excel, the memory of a single server, or the physical throughput of an instrument, we are conditioned to shrink the problem until it fits the box we have.

But what happens when the box disappears? What happens when the tool is no longer a monolithic constraint, but an infinite resource?

An Alien Thought Experiment

What makes a company "Alien" is its ability to stop solving for Scarcity (how do I fit this problem into my tool?) and start solving for Abundance (what is the answer if the tool has no limits?).

Consider two thought experiments:

1. Molecular Development

2. Hardware Development

This brings us to the danger of over-indexing on Efficiency. Efficiency is vital—it keeps the lights on. But from a first-principles perspective, Optimization is the act of stripping away variance. You remove the noise and redundancy to make the machine run perfectly for its current environment.

In a static world, the Optimizer wins. But in a Singularity event—where the environment changes not over decades, but over weeks—Optimization is Over-fitting.

This is why large companies ossify around cost-reducing processes and are disrupted by startups that can experiment. If a company uses AI solely to "optimize"—to make current workflows frictionless—they are tuning themselves perfectly to a past that is disappearing.
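The "Optimization is Over-fitting" claim can be made concrete with a toy sketch. In the hypothetical example below (all data and model choices are invented for illustration), a high-degree polynomial "optimized" tightly to today's environment beats a simpler model in-sample, then fails badly the moment the environment shifts:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Current environment": a simple quadratic signal with a little noise.
x_train = np.linspace(0.0, 1.0, 20)
y_train = x_train**2 + rng.normal(0.0, 0.02, size=x_train.shape)

# The Optimizer: a degree-9 polynomial squeezed tightly onto today's data.
over_fit = np.polyfit(x_train, y_train, deg=9)
# The generalist: a low-degree model that tolerates some residual "noise".
robust_fit = np.polyfit(x_train, y_train, deg=2)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# In the current environment, the Optimizer always looks at least as good...
print(mse(over_fit, x_train, y_train) <= mse(robust_fit, x_train, y_train))

# ...but when the environment shifts (x moves to [1.0, 1.5]), the tightly
# tuned model extrapolates wildly while the simple one stays close.
x_shift = np.linspace(1.0, 1.5, 20)
y_shift = x_shift**2
print(mse(over_fit, x_shift, y_shift) > mse(robust_fit, x_shift, y_shift))
```

The variance the Optimizer stripped away is exactly what the simpler model needed to survive the new environment.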

The Physics of Profit

There is a financial reality to this that we need to appreciate.

Mutation (the Alien approach) attacks Gross Margin. It discovers new geometries, new proteins and molecules, and new markets. It fundamentally extends the surface area of value creation.

Efficiency attacks OpEx. It squeezes the metabolism to preserve the value we’ve already created.

We cannot cut OpEx into a new future. Optimization has a mathematical limit (zero cost). Value creation has no limit. The "Alien" company wins because it uses Intelligence to radically expand the surface area of its Gross Margin in ways a human workflow never could.

Efficiency Subtracts. Mutation Multiplies. 

There is also a human implication here that we should be honest about.

The "Alien" company doesn't use AI to cut headcount. It uses AI to massively expand its surface area—to run more experiments, enter more markets, and test more "weird" ideas than a human organization could ever manage.

Escaping the Gravity Well 

The gravity of the corporate world pulls relentlessly toward Efficiency. It’s measurable. It’s safe. It fits in a quarterly review.

But the companies that choose the Efficiency path alone are choosing a slow fade. They are optimizing themselves into irrelevance against competitors who are playing a different game entirely. The gap will widen—slowly at first, and then more rapidly than can be overcome with any amount of investment, creating an inescapable event horizon.

To escape the gravity well of traditional efficiency, companies must implement the kind of radical variance that allows them to survive the speciation event.


/////////////////////////////////////////////////////////////////////////////////////////////////////////////////

Feb. 4, 2026: The Half-Life of Software: Why Innovation Belongs at the Edge

In my last post, I discussed "Data Intentionality." Today, I want to explore a conceptual shift that challenges our fundamental operational assumptions: The Half-Life of Software.

There was a watershed moment recently that should catch the attention of every enterprise leader. CNBC anchor Deirdre Bosa—who is not a developer—built a functional replacement for a major enterprise SaaS platform in about an hour using generative AI.

This signals a collapse in the unit economics of software. But the implication is deeper than just "coding is cheap."  

It suggests that the era of the "Application" as a capital asset is over.

From Monuments to Consumables

For the last twenty years, the corporate world has treated software like capital infrastructure. We built "monuments"—massive, centralized management tools such as ERPs and CRMs designed to last a decade. Because they were expensive to build, we had to centralize control, prioritize rigid roadmaps, and amortize the cost over years.

But biology—and business—doesn't work like that. Evolution doesn't happen in the center; it happens at the edge. It is messy, redundant, and hyper-reactive to the environment.

In the AI era, software is no longer a monument. It is a consumable. It is a temporary assembly of logic and data, spun up to solve a specific problem, and dissolved when that problem changes. We are entering an era of Disposable Software.

The "Domain" is the Moat

This is why the "Centralize and Prioritize" model is breaking. A centralized IT team, no matter how brilliant, cannot see the nuance of a supply chain bottleneck in Logistics or a pharmacokinetics anomaly in R&D. When a company forces those problems into a central queue, that company is effectively saying that its internal bureaucracy is more important than the market's reality.

As Bezos famously said, "Your margin is my opportunity." Every friction point a company's employees face—every clumsy workaround, every manual spreadsheet—is margin leaking out of that company. If companies don't give the Edge the tools to fix those leaks instantly, they lose the OODA (Observe, Orient, Decide, Act) loop. Multiplied across an organization, this becomes existential.

The Friction of the Old Model 

This explains why the traditional IT model feels so heavy right now. It isn't that IT is doing it wrong; it's that the model was designed for "Long Half-Life" assets.

When we force a "Short Half-Life" problem (a worker needs a quick data parser now) into a "Long Half-Life" queue (a 12-month IT roadmap), we create friction. We ask our domain experts—engineers, scientists, logistics managers, field solutions teams—to wait for a highway to be built when all they need is a path through the woods.

The Physics of the Edge

To move fast, companies need to recognize that Domain Knowledge is the new scarce resource, not coding ability. The nuance of a specific scientific or operational workflow lives at the edge.

Reframing IT: The Architecture of Trust

This calls for an intentional shift in roles, one that refreshes the value of key enterprise functions like IT. IT becomes the Enabler - a role that still carries the heavy responsibilities of scalability and security, but with a new purpose:

To be successful, companies need to get comfortable with the idea that the most valuable software at the company next year might be a tool that doesn't exist today—and won't exist a month later.

Let's call it REDI: Rapid Empowered Distributed Innovation.

The Call to Courage

There is a natural reflex to lock this down. To gravitate towards standardization, justification, and cost control. Those are valid business gates. Standardization feels safe. Chaos feels risky.

But companies need to get comfortable with a "Messy Edge." They need to empower a scientist—or a financial analyst—to spin up a bespoke AI workflow on Tuesday, solve a million-dollar inefficiency, and delete the tool on Friday.

We are moving into a world where competitive advantage comes from how quickly a company can turn "data exhaust" into domain-specific tools. Every function - every action - of every employee - generates data, metadata, and workflows. And the impact is huge: innovation at the edge drives both top-line growth and bottom-line efficiency.

In a world where software is disposable, the company that empowers its edge to build the fastest won't just be more efficient—it will be a different species of competitor entirely.


///////////////////////////////////////////////////////////////////////////////////////////////////////////////

Dec. 15, 2025: The Efficiency Trap: Why "Better, Faster, Cheaper" Isn't Enough for AI

Recently I've been musing on the directions emerging in enterprise AI.

When we look across industries, two distinct strategies seem to be emerging. The first is Automation for Efficiency. This is the operational play—using AI to optimize existing workflows, reduce manual toil, and protect current margins. It is necessary, tangible, and quantifiable.

The second is Augmentation for Innovation. This is the strategic play—using AI to illuminate insights or create differentiated new products, services, or customer capabilities that were previously out of reach or computationally impossible.

This creates a tension in Enterprise AI right now between protecting margins and creating value.

My feeling is that most industries naturally gravitate toward Automation because the ROI is easier to calculate. But Automation inevitably commoditizes. Innovation is where the moat is built. Because efficiency tools are largely accessible to everyone, they cease to be a source of durable competitive advantage. If everyone lowers their OpEx by 10% using the same LLMs, the playing field simply resets at a lower price point.
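A toy calculation makes the "reset" point concrete (the firms and numbers below are invented): when every competitor applies the same proportional efficiency gain, each firm's relative cost position is exactly what it was before.

```python
# Hypothetical illustration: a uniform efficiency gain resets the playing
# field instead of creating advantage. All firms and numbers are invented.

competitors = {"A": 100.0, "B": 120.0, "C": 90.0}  # annual OpEx, $M

# Everyone adopts the same LLM tooling and cuts OpEx by the same 10%.
after = {name: opex * 0.90 for name, opex in competitors.items()}

# Each firm's share of total industry OpEx -- its relative cost
# position -- is unchanged by the uniform cut.
total_before = sum(competitors.values())
total_after = sum(after.values())
for name in competitors:
    before_share = competitors[name] / total_before
    after_share = after[name] / total_after
    print(name, abs(before_share - after_share) < 1e-9)  # True for each firm
```

Total spend falls, but nobody's position improves: the industry just operates at a lower price point.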

The real "alpha" lies in Innovation. It is harder to quantify and harder to execute, but it is the path that leads to new revenue, new markets, and new business models.

The "Catch"

Consider a hypothetical scenario in biotech: Could we use AI to make "cheap" hardware perform like "expensive" hardware?

Imagine we take a rapid, low-fidelity technique—like point-of-care instant thin-layer chromatography (iTLC). The data is messy and low-resolution. But what if we paired that with a massive training set of "Gold Standard" high-performance liquid chromatography (HPLC) data? Could we train a model to look at the "messy" iTLC signal and predict the precise HPLC-grade result?

The answer is almost certainly yes. This concept—call it the "Virtual Instrument"—is a perfect example of AI Augmentation. It doesn't just make a workflow faster; it fundamentally changes the economics of discovery and testing by democratizing high-fidelity results.
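As a miniature sketch of what a "Virtual Instrument" could look like: below, a simple least-squares calibration learns to map a noisy, systematically biased "cheap" signal onto gold-standard reference values. The data, the noise model, and the linear calibration are all invented stand-ins; a real system would use a learned model trained on real paired iTLC/HPLC measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired dataset: each sample has a noisy, biased "cheap
# instrument" reading and a gold-standard reference value. The generative
# model below is invented purely for illustration.
def simulate(n):
    true_purity = rng.uniform(0.80, 1.00, size=n)      # gold-standard labels
    signal = (true_purity                              # cheap reading:
              + rng.normal(0.0, 0.01, size=n)          #   random noise
              + 0.15 * (1.0 - true_purity))            #   systematic bias
    return signal, true_purity

train_signal, train_purity = simulate(500)

# A linear calibration (ordinary least squares) standing in for whatever
# learned model a real "Virtual Instrument" would use.
X = np.column_stack([train_signal, np.ones_like(train_signal)])
slope, intercept = np.linalg.lstsq(X, train_purity, rcond=None)[0]

def predict_gold_standard(signal):
    """Map a cheap-instrument reading to a predicted reference value."""
    return slope * signal + intercept

# The calibrated prediction tracks the gold standard far better than
# the raw cheap signal does.
test_signal, test_purity = simulate(200)
raw_err = np.mean(np.abs(test_signal - test_purity))
cal_err = np.mean(np.abs(predict_gold_standard(test_signal) - test_purity))
print(cal_err < raw_err)  # True
```

The interesting part is not the model; it is the paired dataset. Without intentionally captured gold-standard labels, no amount of modeling can close the gap.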

The Problem of "Data Exhaust"

Historically, hardware and instrument industries have treated data as "exhaust"—it is what comes out of the machine while the customer does the work. Companies sell the box; the customer keeps the data.

But companies like Palantir have shown that the real value often isn't in the algorithm (which is becoming a commodity), but in the ontology—the intentional structuring and connecting of data to reveal insights that were previously invisible.

Yet the business challenge is harder than the technical one. To build something like the Virtual Instrument, companies would need to be intentional about harvesting proprietary "Gold Standard" training data now, long before they know exactly how to monetize the result.

Data Intentionality

This leaves us with a fascinating puzzle to solve: How do we build the business case for "Data Intentionality"? 

Traditional ROI models are great for efficiency—we know exactly what an hour of saved time is worth. But how do we value the creation of a proprietary dataset when the payoff is an innovation we haven't built yet? I think figuring this out is how companies move from simply adopting AI to leading with it.