Wandering Through the Noise

Wandering fast. Thinking slow.

By Parker McKee


The Holy Grail of AI Moats: Intelligence Flywheels

Definition: Intelligence flywheels are self-reinforcing loops where user interaction leads to learning, which improves performance, which increases usage, which in turn generates more interaction and learning.
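That loop can be made concrete with a toy simulation. Everything below is illustrative: the 5% learning rate and the assumption that usage tracks performance one-for-one are invented for the sketch, not calibrated to any real product.

```python
# Toy simulation of an intelligence flywheel. All parameters are
# illustrative; nothing here is calibrated to a real product.
def simulate_flywheel(steps: int, learn_rate: float = 0.05) -> list[float]:
    usage = 1.0        # arbitrary starting level of user interaction
    performance = 1.0  # arbitrary starting product quality
    history = []
    for _ in range(steps):
        learning = learn_rate * usage  # interaction produces learning
        performance += learning        # learning improves performance
        usage = performance            # better performance drives more usage
        history.append(performance)
    return history

curve = simulate_flywheel(10)
```

Because each turn feeds the next, the curve grows geometrically rather than linearly, which is why, as with compound interest, the biggest gains arrive late.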

In 2001, during a visit to Columbia Business School, Warren Buffett was asked what accounted for his extraordinary success. Rather than offering a flashy theory, he simply held up a stack of papers and explained that he reads roughly 500 pages each day. The crucial point, though, wasn’t just about volume; it was about trajectory. “That’s how knowledge works,” he said. “It builds up, like compound interest.”

Learning, like capital, compounds. The more you know, the easier it becomes to add new layers of understanding. Knowledge builds on itself. And just as with investing, the biggest gains often show up later in the curve, when time and accumulation start working together.

Two years later, Bill Gurley of Benchmark Capital put forward a similar idea in a different domain. In a 2003 blog post, he described increasing marginal customer utility as “the nirvana of capitalism.” He observed that each incremental use—or new user—made platforms like eBay and Amazon more valuable, both to individuals and to the network. This ran counter to traditional economics, which holds that marginal utility should diminish with each use. Gurley saw the opposite happening in great digital businesses.

In both cases—Buffett with knowledge, Gurley with user value—we’re looking at systems that get stronger the more they’re used. 

As we move into the AI-native era, things look different than they did over the prior two decades. The next generation of dominant software companies might not rely on traditional network effects at all. Many of the early winners in AI look more like vertical SaaS businesses: focused and domain-specific. Their moat doesn’t come from simply adding more users.

Many seasoned market participants are remarking that the only moat left is development speed (how quickly humans can ship new features).

I disagree. While human speed is critical, we are seeing the beginnings of a new, more powerful moat emerge: one that combines Buffett’s and Gurley’s insights.

This new moat is the holy grail for AI companies: not human speed, not scale, and not network effects, but Intelligence Flywheels. The idea is that each user’s learnings can compound into a highly personalized, N-of-1 product built just for them.

Historically, these learning loops have moved slowly, with human product managers building least-common-denominator features for their broad user base. Generative AI makes it possible to build products that are fundamentally personalized at the individual user level. Code can be generated, models fine-tuned, and reasoning adapted, at the cost of compute rather than human labor. The result is software that doesn’t just react to the user; it evolves with them. Done right, it absorbs their preferences, habits, workflows, strategy, and learnings. Eventually, it doesn’t just serve the customer. It understands them.
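A very loose sketch of that capture-and-adapt loop, using a toy lookup table in place of real fine-tuning or prompt adaptation (the class and field names here are invented for illustration):

```python
# Toy per-user adaptation. Real systems would fine-tune models or adapt
# prompts and retrieval; a nested dictionary stands in for that machinery.
class PersonalizedAssistant:
    def __init__(self) -> None:
        self.preferences: dict[str, dict[str, str]] = {}

    def record_feedback(self, user: str, topic: str, preferred: str) -> None:
        # Every correction is captured rather than discarded.
        self.preferences.setdefault(user, {})[topic] = preferred

    def respond(self, user: str, topic: str, default: str) -> str:
        # Output reflects what this specific user has taught the system.
        return self.preferences.get(user, {}).get(topic, default)

assistant = PersonalizedAssistant()
assistant.record_feedback("dr_lee", "tone", "terse clinical summaries")
```

The moat lives in the accumulated contents of that store: a competitor starting from an empty table has to relearn everything the incumbent already knows about each user.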

And that’s where the moat forms. Companies that can capture and apply user insights—over and over—will build products that become irreplaceable. These tools won’t be static. They’ll grow more valuable with each interaction. Like a trusted colleague or top salesperson, they’ll know what to say and when to say it—not from a script, but from learned experience.

The strength of that advantage lies in how deeply the product learns and how effectively it turns that learning into improved performance. In that sense, personalization isn’t just a feature. It’s the flywheel. And it gets harder to compete with each turn.

What makes these systems defensible is not just personalization—it’s the cumulative learning, the embedded context, and the tight user integration that builds over time. Once a system has ingested months of nuanced user behavior, it becomes prohibitively expensive and operationally complex for competitors to match that performance.

To build this kind of defensibility, AI companies must rethink their metrics. The old standbys—DAUs, retention curves, time-on-site—are useful, but not sufficient. Instead, companies should be asking:

  • What percentage of customer insights do we capture?
  • What percentage of those insights do we relay back into product intelligence?
  • At what latency/cost are we able to execute each learning loop?
  • Is there a measurable link between that feedback loop and increased usage or satisfaction?
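The first three questions reduce to simple ratios over an event log. The schema below (one record per customer insight, with hypothetical `captured`, `relayed`, and `loop_seconds` fields) is invented just to make the metrics concrete:

```python
from statistics import mean

# Hypothetical event log: one record per customer insight, marking whether
# it was captured, whether it was relayed back into the product, and how
# long the full learning loop took.
def flywheel_metrics(events: list[dict]) -> dict:
    captured = [e for e in events if e["captured"]]
    relayed = [e for e in captured if e["relayed"]]
    return {
        # Share of customer insights captured at all.
        "capture_rate": len(captured) / len(events),
        # Share of captured insights relayed back into product intelligence.
        "relay_rate": len(relayed) / len(captured) if captured else 0.0,
        # Average latency of one full learning loop, over relayed insights.
        "avg_loop_seconds": mean(e["loop_seconds"] for e in relayed)
        if relayed
        else None,
    }
```

The fourth question, linking the loop to usage or satisfaction, needs an experiment (for example, comparing cohorts with fast versus slow loops) rather than a ratio.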

These aren’t just operational metrics. They’re strategic imperatives. Each user interaction is a chance to learn. The better a company is at turning interaction into intelligence—and intelligence into customization—the more their product becomes a part of the user’s workflow, and the harder it is to leave.

We’re already seeing this dynamic take shape. One of our portfolio companies, Abridge, is building intelligence flywheels in healthcare. Their product doesn’t just transcribe—it learns the vocabulary, cadence, and clinical nuance of each medical specialty, hospital system, and physician. The system can get sharper with every use.

Another example is Cursor, an AI-powered IDE gaining traction among software engineers. Cursor is in the early stages, focused on integrating into a developer’s unique codebase to provide context-aware suggestions. But it’s easy to see the trajectory: from autocomplete to understanding architecture, from syntax to style. Over time, Cursor could evolve into an engineering partner that feels native to each developer.

Of course, not every product is well-suited to this model. In some cases, the learning curve is shallow and the payoff modest. If a foundation model can handle 95% of the task out of the box, the remaining 5% might not justify the effort to personalize.

Poor fits for intelligence flywheels tend to include:

  • One-time-use or transactional tools—where little or no learning takes place.
  • Highly regulated workflows—where deviation is discouraged.
  • Fixed-function infrastructure—like APIs or databases—where standardization matters more than nuance.

On the other hand, ideal candidates for intelligence flywheels share common traits:

  • User preferences that evolve – requiring software to learn continuously to keep up.
  • High dimensionality – lots of variables, edge cases, and unknowns.
  • Creative or design-rich interfaces – where user taste and originality drive value.
  • High variance from the mean – where “one-size-fits-all” rarely performs well.

In some businesses, the ability to learn quickly and tailor solutions precisely isn’t just a competitive advantage—it is the moat. And the size of that moat tends to grow in proportion to the cost of forgetting. If repeating mistakes is expensive—or if overlooking a pattern creates real consequences—then having a system that remembers, adapts, and gets a bit smarter with each cycle becomes enormously valuable.

Buffett and Gurley have long appreciated systems that get better the more they’re used. When every interaction improves the output, and every mistake sharpens the model, you’re no longer just selling a product—you’re compounding capability.

That’s how you move from being helpful to becoming indispensable. Intelligence flywheels favor those who begin early, capture learnings aggressively, and compound those learnings at the highest rate.
