Five Rules for Finding Best-in-Class AI Enterprise Software
A practical framework for identifying durable, defensible AI enterprise software
Over the past 18 months, I’ve studied more AI enterprise software companies than I can count. Early on, every pitch felt exciting. Then the wave of trillion-dollar TAM slides, startup/microcap “AI disruptors,” and slick investor decks hit—and somewhere along the way I went from bullish, to bearish, to something closer to neutral.
What confused me most was the disconnect between theory and reality. Most of these AI enterprise software companies are essentially business process outsourcing (BPO); the difference is that humans are replaced by AI. In theory, BPO is an ugly industry: hyper-competitive, low switching costs, and little differentiation except for scale. If everyone has equal access to AI, nothing should change—everyone can automate the same workflows with the same models. Eventually, competition would compress margins back to historical norms.
But in practice, a handful of AI-native operators—many still sub-scale—are winning real contracts and growing quickly. Why are some players breaking out while others stall? What’s durable, and what’s just early-mover noise?
Below is the five-rule framework that distills my thinking so far. It is still evolving, and I’ll probably change my mind again. I admit the title is a bit misleading. But “five-rule framework” is more appealing than “five lessons subject to revision,” and it captures the patterns I see across the most promising operators.
None of these rules stands alone (great companies usually check several of them at once), nor do they replace valuation work. But they help separate durable business models from AI slop.
Rule 0 — Execution still matters
Before anything else: if a company can’t win customers or ship excellent products, the rest is irrelevant. A lot of the early “AI BPO winners” are probably just good operators riding temporary momentum. Without the other rules, that momentum won’t last.
Rule 1 — Own a mission-critical workflow with asymmetric cost of failure
The product must run a workflow where failure creates regulatory, legal, financial, or operational damage so severe that switching becomes irrational.
Cost of failure must be ≥10× the benefit of switching.
Common in financial reporting, compliance, settlement, legal filings, credit decisioning, and liability-sensitive workflows.
Also includes essential but non-core work customers must outsource (e.g., disclosure systems, registry lodgement, compliance controls).
Test: If the system vanished today, would the CEO/CFO/COO call an emergency meeting within 48 hours?
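The ≥10× threshold above amounts to a back-of-envelope check. The sketch below is only an illustration of that arithmetic; the dollar figures and the `switching_is_irrational` helper are hypothetical, not data from any real company:

```python
def switching_is_irrational(cost_of_failure: float,
                            switching_benefit: float,
                            threshold: float = 10.0) -> bool:
    """Rule 1 heuristic: switching vendors is irrational when the expected
    cost of a workflow failure dwarfs the benefit of moving to a cheaper one."""
    return cost_of_failure >= threshold * switching_benefit

# Hypothetical example: a mis-filed regulatory disclosure risks a $5M fine,
# while switching vendors would save roughly $200K per year.
print(switching_is_irrational(5_000_000, 200_000))  # True: $5M >= 10 x $200K
print(switching_is_irrational(1_000_000, 200_000))  # False: only a 5x ratio
```

The point is not precision; it is that the asymmetry must be large enough to survive sloppy estimation.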
Rule 2 — Control the entire workflow, not just a feature or tool
The company must act as the orchestrator or operator of the process, not just a copilot or tool. It needs to deliver the outcome, not just enable it.
Must sit in the “system of record & action” layer. For example, an AI system automating a financial disclosure workflow would span intake → data cleansing → validation → document generation → submission → monitoring.
If Microsoft/SAP/Salesforce/Epic could bolt on the same feature and kill the business, it fails.
If removing the vendor doesn’t break the workflow, it’s just a tool.
Test: Can customers keep their existing platforms but remove this vendor without the workflow falling apart?
Rule 3 — AI improves unit economics, but is not the value proposition
AI should materially reduce variable labor and increase throughput, pushing the model toward SaaS-like margins—but customers shouldn’t buy it because it’s AI.
AI serves as a labor substitute and throughput multiplier.
The business should still make sense in a world where “AI” is not a pitch.
If differentiation disappears once competitors access similar models, the advantage was not real.
Test: In a world where everyone has the latest LLM, does this company still have superior economics because of where AI is embedded and what proprietary data it can learn from?
Rule 4 — Closed-loop proprietary data + process flywheel
Operating the workflow must generate feedback loops that improve automation accuracy and deepen switching costs.
Includes raw data (documents, filings, exceptions) and process knowledge (playbooks, escalation paths, regulator feedback).
Data must be workflow-native. This is something general-purpose AI models cannot replicate from public data.
Scale → More data captured → Improved product → More customers → More data captured → etc.
Test: Could a general-purpose model match performance without this vendor’s proprietary workflow data? If yes, the moat is weak.
Rule 5 — Small market share, reinvesting aggressively into the same workflow at high returns
The best companies are early in their penetration curve and reinvesting into the workflow they already dominate.
Look for high LTV/CAC, fast payback, strong customer retention, and high incremental ROIC.
Expansion should be depth-first (more value in the same workflow) or into adjacent areas with similar switching cost.
Test: Is every new dollar of R&D/S&M clearly aimed at strengthening the same mission-critical workflow?
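The screening metrics in Rule 5 reduce to simple ratios. Here is a minimal sketch of the standard definitions of LTV, LTV/CAC, and payback period; all inputs are hypothetical and chosen only to show the arithmetic:

```python
def ltv(arr: float, gross_margin: float, annual_churn: float) -> float:
    """Lifetime value: annual gross profit per customer / annual churn rate."""
    return arr * gross_margin / annual_churn

def payback_months(cac: float, arr: float, gross_margin: float) -> float:
    """Months of gross profit needed to recover customer acquisition cost."""
    return cac / (arr * gross_margin / 12)

# Hypothetical customer: $100K ARR, 80% gross margin, 10% annual churn, $150K CAC
customer_ltv = ltv(100_000, 0.80, 0.10)            # $800,000
ltv_to_cac = customer_ltv / 150_000                # ~5.3x
months = payback_months(150_000, 100_000, 0.80)    # ~22.5 months

print(round(ltv_to_cac, 1), round(months, 1))
```

A company passing Rule 5 should show ratios well above the common 3x LTV/CAC rule of thumb, with payback shortening as the workflow deepens.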
At the end of the day, this isn’t a perfect or permanent framework, but it’s the clearest lens I’ve found to separate real, durable AI businesses from the noise. Reconciling the gap between theory and practice in AI-enabled service businesses requires a deep understanding of unit economics.
The companies that matter will be the ones that anchor themselves to mission-critical workflows, build genuine moats, and let AI amplify their value. My hope is that these five rules make it a little easier to spot them.

